Dataset columns:

- Unnamed: 0: int64, values 0 to 4.66k
- page content: string, lengths 23 to 2k
- description: string, lengths 8 to 925
- output: string, lengths 38 to 2.93k
# Router

This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the next chain to use for a given input. Router chains are made up of two components:

- The RouterChain itself (responsible for selecting the next chain to call)
- destination_chains: chains that the router chain can route to

In this notebook, we will focus on the different types of routing chains. We will show these routing chains used in a MultiPromptChain to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.

```python
from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.

Here is a question:
{input}"""

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template,
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template,
    },
]

llm = OpenAI()

destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain
default_chain = ConversationChain(llm=llm, output_key="text")
```

## LLMRouterChain

This chain uses an LLM to determine how to route things.

```python
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)

chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)

print(chain.run("What is black body radiation?"))
```

```
> Entering new MultiPromptChain chain...
physics: {'input': 'What is black body radiation?'}
> Finished chain.
Black body radiation is the term used to describe the electromagnetic radiation emitted by a “black body”—an object that absorbs all radiation incident upon it. A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. It does not reflect, emit or transmit energy. This type of radiation is the result of the thermal motion of the body's atoms and molecules, and it is emitted at all wavelengths. The spectrum of radiation emitted is described by Planck's law and is known as the black body spectrum.
```

```python
print(
    chain.run(
        "What is the first prime number greater than 40 such that one plus the prime number is divisible by 3?"
    )
)
```

```
> Entering new MultiPromptChain chain...
math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3?'}
> Finished chain.
? The answer is 43. One plus 43 is 44 which is divisible by 3.
```

```python
print(chain.run("What is the name of the type of cloud that rains?"))
```

```
> Entering new MultiPromptChain chain...
None: {'input': 'What is the name of the type of cloud that rains?'}
> Finished chain.
The type of cloud that rains is called a cumulonimbus cloud. It is a tall and dense cloud that is often accompanied by thunder and lightning.
```
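To make the routing decision less opaque, it helps to look at what the router LLM is asked to produce. As a rough sketch (the exact completion format is an assumption based on how MULTI_PROMPT_ROUTER_TEMPLATE and RouterOutputParser behave; MultiPromptChain does this parsing and dispatch for you), the model returns a small JSON object naming a destination and the input to forward:

```python
import json

# Hypothetical raw completion from the router LLM (illustrative only).
router_completion = """
{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}
"""

decision = json.loads(router_completion)

# Sketch of the dispatch step: look up the named destination, otherwise fall back
# to default_chain (the real parser also treats a destination of "DEFAULT" this way).
destination_chain = destination_chains.get(decision["destination"], default_chain)
print(destination_chain.prompt.template[:60])  # here this would be the physics prompt
```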
## EmbeddingRouterChain

The EmbeddingRouterChain uses embeddings and similarity to route between destination chains.

```python
from langchain.chains.router.embedding_router import EmbeddingRouterChain
from langchain.embeddings import CohereEmbeddings
from langchain.vectorstores import Chroma

names_and_descriptions = [
    ("physics", ["for questions about physics"]),
    ("math", ["for questions about math"]),
]

router_chain = EmbeddingRouterChain.from_names_and_descriptions(
    names_and_descriptions, Chroma, CohereEmbeddings(), routing_keys=["input"]
)
```

```
Using embedded DuckDB without persistence: data will be transient
```

```python
chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)

print(chain.run("What is black body radiation?"))
```

```
> Entering new MultiPromptChain chain...
physics: {'input': 'What is black body radiation?'}
> Finished chain.
Black body radiation is the emission of energy from an idealized physical body (known as a black body) that is in thermal equilibrium with its environment. It is emitted in a characteristic pattern of frequencies known as a black-body spectrum, which depends only on the temperature of the body. The study of black body radiation is an important part of astrophysics and atmospheric physics, as the thermal radiation emitted by stars and planets can often be approximated as black body radiation.
```

```python
print(
    chain.run(
        "What is the first prime number greater than 40 such that one plus the prime number is divisible by 3?"
    )
)
```

```
> Entering new MultiPromptChain chain...
math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3?'}
> Finished chain.
? Answer: The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43.
```
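Because the embedding router only does a similarity lookup against the destination descriptions, it avoids the extra LLM call that LLMRouterChain makes per request, but adding a destination means registering its description again. A minimal sketch of adding one more destination, reusing the objects defined above (the "history" prompt is hypothetical):

```python
# Hypothetical extra destination; the prompt text is illustrative only.
history_template = """You are a careful historian. You answer questions about history accurately.

Here is a question:
{input}"""

history_prompt = PromptTemplate(template=history_template, input_variables=["input"])
destination_chains["history"] = LLMChain(llm=llm, prompt=history_prompt)

names_and_descriptions.append(("history", ["for questions about history"]))

# Rebuild the router so the new description is embedded alongside the others.
router_chain = EmbeddingRouterChain.from_names_and_descriptions(
    names_and_descriptions, Chroma, CohereEmbeddings(), routing_keys=["input"]
)

chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
```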
# Sequential

The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.

In this notebook we will walk through some examples of how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:

- SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
- SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.

```python
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# This is an LLMChain to write a synopsis given a title of a play.
llm = OpenAI(temperature=0.7)
synopsis_template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.

Title: {title}
Playwright: This is a synopsis for the above play:"""
synopsis_prompt_template = PromptTemplate(input_variables=["title"], template=synopsis_template)
synopsis_chain = LLMChain(llm=llm, prompt=synopsis_prompt_template)

# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=0.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)

# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SimpleSequentialChain

overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)
review = overall_chain.run("Tragedy at sunset on the beach")
```

```
> Entering new SimpleSequentialChain chain...
Tragedy at Sunset on the Beach is a story of a young couple, Jack and Sarah, who are in love and looking forward to their future together. On the night of their anniversary, they decide to take a walk on the beach at sunset. As they are walking, they come across a mysterious figure, who tells them that their love will be tested in the near future. The figure then tells the couple that the sun will soon set, and with it, a tragedy will strike. If Jack and Sarah can stay together and pass the test, they will be granted everlasting love. However, if they fail, their love will be lost forever. The play follows the couple as they struggle to stay together and battle the forces that threaten to tear them apart. Despite the tragedy that awaits them, they remain devoted to one another and fight to keep their love alive. In the end, the couple must decide whether to take a chance on their future together or succumb to the tragedy of the sunset.
Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats. The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
> Finished chain.
```

```python
print(review)
```

```
Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats. The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
```
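Conceptually, SimpleSequentialChain just feeds each step's single output into the next step's single input. A hand-rolled sketch of the same pipeline, using only the two chains defined above, makes that single input/single output constraint concrete:

```python
def run_pipeline(title: str) -> str:
    """Rough equivalent of the SimpleSequentialChain above (sketch, not the library's implementation)."""
    synopsis = synopsis_chain.run(title)   # step 1: title -> synopsis
    review = review_chain.run(synopsis)    # step 2: synopsis -> review
    return review

print(run_pipeline("Tragedy at sunset on the beach")[:200])
```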
## Sequential Chain

Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs.

Of particular importance is how we name the input/output variables. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about it because we have multiple inputs.

```python
# This is an LLMChain to write a synopsis given a title of a play and the era it is set in.
llm = OpenAI(temperature=0.7)
synopsis_template = """You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.

Title: {title}
Era: {era}
Playwright: This is a synopsis for the above play:"""
synopsis_prompt_template = PromptTemplate(input_variables=["title", "era"], template=synopsis_template)
synopsis_chain = LLMChain(llm=llm, prompt=synopsis_prompt_template, output_key="synopsis")

# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=0.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review")

# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SequentialChain

overall_chain = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["era", "title"],
    # Here we return multiple variables
    output_variables=["synopsis", "review"],
    verbose=True,
)

overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})
```

```
> Entering new SequentialChain chain...
> Finished chain.
{'title': 'Tragedy at sunset on the beach',
 'era': 'Victorian England',
 'synopsis': "\n\nThe play follows the story of John, a young man from a wealthy Victorian family, who dreams of a better life for himself. He soon meets a beautiful young woman named Mary, who shares his dream. The two fall in love and decide to elope and start a new life together.\n\nOn their journey, they make their way to a beach at sunset, where they plan to exchange their vows of love. Unbeknownst to them, their plans are overheard by John's father, who has been tracking them. He follows them to the beach and, in a fit of rage, confronts them. \n\nA physical altercation ensues, and in the struggle, John's father accidentally stabs Mary in the chest with his sword. The two are left in shock and disbelief as Mary dies in John's arms, her last words being a declaration of her love for him.\n\nThe tragedy of the play comes to a head when John, broken and with no hope of a future, chooses to take his own life by jumping off the cliffs into the sea below. \n\nThe play is a powerful story of love, hope, and loss set against the backdrop of 19th century England.",
 'review': "\n\nThe latest production from playwright X is a powerful and heartbreaking story of love and loss set against the backdrop of 19th century England. The play follows John, a young man from a wealthy Victorian family, and Mary, a beautiful young woman with whom he falls in love. The two decide to elope and start a new life together, and the audience is taken on a journey of hope and optimism for the future.\n\nUnfortunately, their dreams are cut short when John's father discovers them and in a fit of rage, fatally stabs Mary. The tragedy of the play is further compounded when John, broken and without hope, takes his own life. The storyline is not only realistic, but also emotionally compelling, drawing the audience in from start to finish.\n\nThe acting was also commendable, with the actors delivering believable and nuanced performances. The playwright and director have successfully crafted a timeless tale of love and loss that will resonate with audiences for years to come. Highly recommended."}
```
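The key wiring is worth spelling out: each chain's output_key must match an input variable of whichever prompt consumes it downstream, and output_variables must name keys that some chain actually produces. A small sanity check against the objects defined above (the printed values are what that code should yield):

```python
# Sanity-check how the variables line up (names taken from the code above).
print(synopsis_chain.output_key)            # 'synopsis': produced by step 1
print(review_chain.prompt.input_variables)  # ['synopsis']: consumed by step 2
print(review_chain.output_key)              # 'review': produced by step 2
print(overall_chain.output_variables)       # ['synopsis', 'review']: returned to the caller
```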
## Memory in Sequential Chains

Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using SimpleMemory is a convenient way to manage this and clean up your chains.

For example, using the previous playwright SequentialChain, let's say you wanted to include some context about the date, time, and location of the play, and then use the generated synopsis and review to create some social media post text. You could add these new context variables as input_variables, or we can add a SimpleMemory to the chain to manage this context:

```python
from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory

llm = OpenAI(temperature=0.7)
template = """You are a social media manager for a theater company. Given the title of play, the era it is set in, the date, time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play.

Here is some context about the time and location of the play:
Date and Time: {time}
Location: {location}

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:
{review}

Social Media Post:"""
prompt_template = PromptTemplate(input_variables=["synopsis", "review", "time", "location"], template=template)
social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="social_post_text")

overall_chain = SequentialChain(
    memory=SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"}),
    chains=[synopsis_chain, review_chain, social_chain],
    input_variables=["era", "title"],
    # Here we return multiple variables
    output_variables=["social_post_text"],
    verbose=True,
)

overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})
```

```
> Entering new SequentialChain chain...
> Finished chain.
{'title': 'Tragedy at sunset on the beach',
 'era': 'Victorian England',
 'time': 'December 25th, 8pm PST',
 'location': 'Theater in the Park',
 'social_post_text': "\nSpend your Christmas night with us at Theater in the Park and experience the heartbreaking story of love and loss that is 'A Walk on the Beach'. Set in Victorian England, this romantic tragedy follows the story of Frances and Edward, a young couple whose love is tragically cut short. Don't miss this emotional and thought-provoking production that is sure to leave you in tears. #AWalkOnTheBeach #LoveAndLoss #TheaterInThePark #VictorianEngland"}
```
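SimpleMemory does no reading or writing of conversation history; it just hands back the same fixed key/value pairs on every call, and SequentialChain merges them into each step's inputs. That is why {time} and {location} are available to the social post prompt without appearing in input_variables. A minimal sketch of that behavior (the printed dict is the expected result, based on how SimpleMemory is documented to behave):

```python
from langchain.memory import SimpleMemory

memory = SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"})

# The memory is read-only: it returns its fixed context regardless of the inputs.
print(memory.load_memory_variables({}))
# {'time': 'December 25th, 8pm PST', 'location': 'Theater in the Park'}
```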
# Transformation

This notebook showcases using a generic transformation chain.

As an example, we will create a dummy transformation that takes in a super long text, filters the text to only the first 3 paragraphs, and then passes that into an LLMChain to summarize those.

```python
from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

def transform_func(inputs: dict) -> dict:
    text = inputs["text"]
    shortened_text = "\n\n".join(text.split("\n\n")[:3])
    return {"output_text": shortened_text}

transform_chain = TransformChain(
    input_variables=["text"], output_variables=["output_text"], transform=transform_func
)

template = """Summarize this text:

{output_text}

Summary:"""
prompt = PromptTemplate(input_variables=["output_text"], template=template)
llm_chain = LLMChain(llm=OpenAI(), prompt=prompt)

sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])
sequential_chain.run(state_of_the_union)
```

```
' The speaker addresses the nation, noting that while last year they were kept apart due to COVID-19, this year they are together again. They are reminded that regardless of their political affiliations, they are all Americans.'
```
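The only contract TransformChain asks of the function is dict in, dict out, with keys matching input_variables and output_variables, so any plain Python preprocessing can slot in. As a sketch, here is a hypothetical alternative transform that normalizes whitespace and caps the text length before it reaches the LLM:

```python
import re

def clean_and_truncate(inputs: dict) -> dict:
    """Hypothetical alternative transform: collapse whitespace and cap length."""
    text = inputs["text"]
    text = re.sub(r"\s+", " ", text).strip()
    return {"output_text": text[:2000]}

clean_chain = TransformChain(
    input_variables=["text"], output_variables=["output_text"], transform=clean_and_truncate
)
```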
# Text splitting by header

Text splitting for vector storage often uses sentences or other delimiters to keep related text together. But many documents (such as Markdown files) have structure (headers) that can be explicitly used in splitting. The MarkdownHeaderTextSplitter lets a user split Markdown files based on specified headers. This results in chunks that retain the header(s) they came from in their metadata.

This works nicely with SelfQueryRetriever:

- First, tell the retriever about our splits.
- Then, query based on the doc structure (e.g., "summarize the doc introduction"). Only chunks from that section of the document will be filtered and used in chat / Q+A.

Let's test this out on an example Notion page!

First, I download the page to Markdown as explained here.

```python
# Load Notion page as a markdown file
from langchain.document_loaders import NotionDirectoryLoader

path = "../Notion_DB/"
loader = NotionDirectoryLoader(path)
docs = loader.load()
md_file = docs[0].page_content

# Let's create groups based on the section headers in our page
from langchain.text_splitter import MarkdownHeaderTextSplitter

headers_to_split_on = [
    ("###", "Section"),
]
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
md_header_splits = markdown_splitter.split_text(md_file)
```
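To see what the header splitter produces, it can help to run it on a tiny inline document first. Each returned chunk is a Document whose metadata records the header it fell under (a sketch; the sample markdown and the printed shape are illustrative assumptions):

```python
sample_md = """### Introduction
Q+A systems often retrieve chunks and then synthesize an answer.

### Testing
We evaluate the retriever on a small set of questions."""

for doc in MarkdownHeaderTextSplitter(
    headers_to_split_on=[("###", "Section")]
).split_text(sample_md):
    print(doc.metadata, "->", doc.page_content[:40])
# Expected shape: {'Section': 'Introduction'} -> 'Q+A systems often retrieve chunks an...'
```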
Now, perform text splitting on the header-grouped documents.

```python
# Define our text splitter
from langchain.text_splitter import RecursiveCharacterTextSplitter

chunk_size = 500
chunk_overlap = 0
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
all_splits = text_splitter.split_documents(md_header_splits)
```

This sets us up well to perform metadata filtering based on the document structure.

Let's bring this all together by building a vectorstore first.

```
pip install chromadb
```

```python
# Build vectorstore and keep the metadata
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())
```

Let's create a SelfQueryRetriever that can filter based upon the metadata we defined.

```python
# Create retriever
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo

# Define our metadata
metadata_field_info = [
    AttributeInfo(
        name="Section",
        description="Part of the document that the text comes from",
        type="string or list[string]",
    ),
]
document_content_description = "Major sections of the document"

# Define self query retriever
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```

We can see that we can query only for texts in the Introduction of the document!

```python
# Test
retriever.get_relevant_documents("Summarize the Introduction section of the document")
```

```
query='Introduction' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Section', value='Introduction') limit=None
[Document(page_content='![Untitled](Auto-Evaluation%20of%20Metadata%20Filtering%2018502448c85240828f33716740f9574b/Untitled.png)', metadata={'Section': 'Introduction'}),
 Document(page_content='Q+A systems often use a two-step approach: retrieve relevant text chunks and then synthesize them into an answer. There many ways to approach this. For example, we recently [discussed](https://blog.langchain.dev/auto-evaluation-of-anthropic-100k-context-window/) the Retriever-Less option (at bottom in the below diagram), highlighting the Anthropic 100k context window model. Metadata filtering is an alternative approach that pre-filters chunks based on a user-defined criteria in a VectorDB using', metadata={'Section': 'Introduction'}),
 Document(page_content='metadata tags prior to semantic search.', metadata={'Section': 'Introduction'})]
```
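The first line of that output is the structured query the retriever's LLM constructed from the natural-language request. If you want to sanity-check the underlying metadata filter without involving the query-construction step, you can query the vectorstore directly; a sketch assuming Chroma's `filter` keyword does exact matching on metadata values:

```python
# Direct similarity search with an explicit metadata filter (sketch; assumes the
# Chroma wrapper's `filter` kwarg matches metadata values exactly).
docs = vectorstore.similarity_search(
    "two-step retrieval", k=2, filter={"Section": "Introduction"}
)
for d in docs:
    print(d.metadata["Section"], "->", d.page_content[:60])
```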
often use a two-step approach: retrieve relevant text chunks and then synthesize them into an answer. There many ways to approach this. For example, we recently [discussed](https://blog.langchain.dev/auto-evaluation-of-anthropic-100k-context-window/) the Retriever-Less option (at bottom in the below diagram), highlighting the Anthropic 100k context window model. Metadata filtering is an alternative approach that pre-filters chunks based on a user-defined criteria in a VectorDB using', metadata={'Section': 'Introduction'}), Document(page_content='metadata tags prior to semantic search.', metadata={'Section': 'Introduction'})]# Testretriever.get_relevant_documents("Summarize the Introduction section of the document") query='Introduction' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Section', value='Introduction') limit=None [Document(page_content='![Untitled](Auto-Evaluation%20of%20Metadata%20Filtering%2018502448c85240828f33716740f9574b/Untitled.png)', metadata={'Section': 'Introduction'}), Document(page_content='Q+A systems often use a two-step approach: retrieve relevant text chunks and then synthesize them into an answer. There many ways to approach this. For example, we recently [discussed](https://blog.langchain.dev/auto-evaluation-of-anthropic-100k-context-window/) the Retriever-Less option (at bottom in the below diagram), highlighting the Anthropic 100k context window model. Metadata filtering is an alternative approach that pre-filters chunks based on a user-defined criteria in a VectorDB using', metadata={'Section': 'Introduction'}), Document(page_content='metadata tags prior to semantic search.', metadata={'Section': 'Introduction'})]We can also look at other parts of the document.retriever.get_relevant_documents("Summarize the Testing section of the document") query='Testing' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Section', value='Testing') limit=None
Text splitting for vector storage often uses sentences or other delimiters to keep related text together.
1,717
value='Testing') limit=None [Document(page_content='![Untitled](Auto-Evaluation%20of%20Metadata%20Filtering%2018502448c85240828f33716740f9574b/Untitled%202.png)', metadata={'Section': 'Testing'}), Document(page_content='`SelfQueryRetriever` works well in [many cases](https://twitter.com/hwchase17/status/1656791488569954304/photo/1). For example, given [this test case](https://twitter.com/hwchase17/status/1656791488569954304?s=20): \n![Untitled](Auto-Evaluation%20of%20Metadata%20Filtering%2018502448c85240828f33716740f9574b/Untitled%201.png) \nThe query can be nicely broken up into semantic query and metadata filter: \n```python\nsemantic query: "prompt injection"', metadata={'Section': 'Testing'}), Document(page_content='Below, we can see detailed results from the app: \n- Kor extraction is above to perform the transformation between query and metadata format ✅\n- Self-querying attempts to filter using the episode ID (`252`) in the query and fails 🚫\n- Baseline returns docs from 3 different episodes (one from `252`), confusing the answer 🚫', metadata={'Section': 'Testing'}), Document(page_content='will use in retrieval [here](https://github.com/langchain-ai/auto-evaluator/blob/main/streamlit/kor_retriever_lex.py).', metadata={'Section': 'Testing'})]Now, we can create chat or Q+A apps that are aware of the explicit document structure. The ability to retain document structure for metadata filtering can be helpful for complicated or longer documents.from langchain.chains import RetrievalQAfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)qa_chain = RetrievalQA.from_chain_type(llm, retriever=retriever)qa_chain.run("Summarize the Testing section of the document") query='Testing' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Section', value='Testing') limit=None 'The Testing section of the document describes the evaluation of the `SelfQueryRetriever` component in
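The chain construction above is flattened by the page export; here is a minimal sketch of the same flow for readability, assuming the `vectorstore` and `metadata_field_info` objects built earlier in this notebook:

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.retrievers.self_query.base import SelfQueryRetriever

# Retriever that turns "Summarize the Testing section" into a semantic query
# plus a Section == "Testing" metadata filter.
retriever = SelfQueryRetriever.from_llm(
    OpenAI(temperature=0),
    vectorstore,                          # Chroma store carrying "Section" metadata
    "Major sections of the document",     # document_content_description
    metadata_field_info,
    verbose=True,
)

# Q+A chain that only synthesizes over the metadata-filtered chunks.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=retriever)
qa_chain.run("Summarize the Testing section of the document")
```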
Text splitting for vector storage often uses sentences or other delimiters to keep related text together.
1,718
of the `SelfQueryRetriever` component in comparison to a baseline model. The evaluation was performed on a test case where the query was broken down into a semantic query and a metadata filter. The results showed that the `SelfQueryRetriever` component was able to perform the transformation between query and metadata format, but failed to filter using the episode ID in the query. The baseline model returned documents from three different episodes, which confused the answer. The `SelfQueryRetriever` component was deemed to work well in many cases and will be used in retrieval.'
Text splitting for vector storage often uses sentences or other delimiters to keep related text together.
1,719
Notion DB | 🦜️🔗 Langchain
Notion is a collaboration platform with modified Markdown support that integrates kanban
1,720
Notion is a collaboration platform with modified Markdown support that integrates kanban
1,721
Notion DBNotion is a collaboration platform with modified Markdown support that integrates kanban
Notion is a collaboration platform with modified Markdown support that integrates kanban
1,722
boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.Installation and SetupAll instructions are in examples below.Document LoaderWe have two different loaders: NotionDirectoryLoader and NotionDBLoader.See a usage example for the NotionDirectoryLoader.from langchain.document_loaders import NotionDirectoryLoaderSee a usage example for the NotionDBLoader.from langchain.document_loaders import NotionDBLoader
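A minimal usage sketch of the two loaders; the folder name, integration token, and database ID below are placeholders you would replace with your own:

```python
from langchain.document_loaders import NotionDirectoryLoader, NotionDBLoader

# Load Markdown pages from a local export of a Notion workspace.
dir_loader = NotionDirectoryLoader("Notion_DB")  # path to the exported folder (placeholder)
dir_docs = dir_loader.load()

# Load pages directly from a Notion database through the API.
db_loader = NotionDBLoader(
    integration_token="<NOTION_INTEGRATION_TOKEN>",  # placeholder secret
    database_id="<NOTION_DATABASE_ID>",              # placeholder ID
)
db_docs = db_loader.load()
```

The directory loader works offline from an exported folder, while the database loader pulls live page content through the Notion API.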
Notion is a collaboration platform with modified Markdown support that integrates kanban
1,723
RAG over in-memory documents | 🦜️🔗 Langchain
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,724
RAG over in-memory documentsHere we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.Prepare DataFirst we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Chromafrom langchain.docstore.document import Documentfrom langchain.prompts import PromptTemplatefrom langchain.indexes.vectorstore import VectorstoreIndexCreatorwith open("../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)embeddings = OpenAIEmbeddings()docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever() Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.query = "What did the president say about Justice Breyer"docs = docsearch.get_relevant_documents(query)from
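Because the chains below only consume a list of Documents, the retriever is optional; here is a small illustrative sketch of building that list directly in memory instead (the texts and source labels are invented placeholders):

```python
from langchain.docstore.document import Document

# Any list of Documents works as input to load_qa_chain; they could come from
# an API call, a database query, or be constructed by hand as shown here.
docs = [
    Document(page_content="Justice Breyer was thanked for his service.", metadata={"source": "0"}),
    Document(page_content="The speech also discussed infrastructure.", metadata={"source": "1"}),
]
```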
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,725
= docsearch.get_relevant_documents(query)from langchain.chains.question_answering import load_qa_chainfrom langchain.llms import OpenAIQuickstartIf you just want to get started as quickly as possible, this is the recommended way to do it:chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")query = "What did the president say about Justice Breyer"chain.run(input_documents=docs, question=query) ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'If you want more control and understanding over what is happening, please see the information below.The stuff ChainThis section shows results of using the stuff Chain to do question answering.chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'}Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.{context}Question: {question}Answer in Italian:"""PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"])chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha ricevuto una vasta gamma di supporto.'}The map_reduce ChainThis section shows results of using the map_reduce Chain to do question answering.chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")query = "What
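For intuition, the "stuff" strategy simply concatenates every retrieved chunk into the prompt's {context} slot and asks the model once. The following is an illustrative sketch only, not the library's internals, and reuses the `docs`, `query`, and `PROMPT` objects defined above:

```python
# What "stuff" does conceptually: join all chunks, fill the prompt, call the LLM once.
context = "\n\n".join(doc.page_content for doc in docs)
filled_prompt = PROMPT.format(context=context, question=query)
answer = OpenAI(temperature=0)(filled_prompt)
```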
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,726
chain_type="map_reduce")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}Intermediate StepsWe can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_map_steps variable.chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."', ' A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.', ' None', ' None'], 'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question.Return any relevant text translated into italian.{context}Question: {question}Relevant text, if any, in Italian:"""QUESTION_PROMPT = PromptTemplate( template=question_prompt_template, input_variables=["context",
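With return_map_steps=True, the returned dict carries one mapped answer per input document, which makes it easy to see what the combine step worked from; a short sketch using the chain and inputs defined above:

```python
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)

# One mapped answer per retrieved chunk, followed by the combined answer.
for i, step in enumerate(result["intermediate_steps"]):
    print(f"chunk {i}: {step!r}")
print("final:", result["output_text"])
```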
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,727
input_variables=["context", "question"])combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer in Italian.If you don't know the answer, just say that you don't know. Don't try to make up an answer.QUESTION: {question}========={summaries}=========Answer in Italian:"""COMBINE_PROMPT = PromptTemplate( template=combine_prompt_template, input_variables=["summaries", "question"])chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", '\nNessun testo pertinente.', ' Non ha detto nulla riguardo a Justice Breyer.', " Non c'è testo pertinente."], 'output_text': ' Non ha detto nulla riguardo a Justice Breyer.'}Batch SizeWhen using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so:llm = OpenAI(batch_size=5, temperature=0)The refine ChainThis section shows results of using the refine Chain to do question answering.chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of
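The batch-limited LLM is then passed into load_qa_chain like any other model; a short sketch of wiring it into the map_reduce chain from above:

```python
# Cap the map step at 5 generations per batch to stay under rate limits.
llm = OpenAI(batch_size=5, temperature=0)
chain = load_qa_chain(llm, chain_type="map_reduce")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```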
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,728
dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which he said would be the most sweeping investment to rebuild America in history and would help the country compete for the jobs of the 21st Century.'}Intermediate StepsWe can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_refine_steps variable.chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country and his legacy of excellence.', '\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'], 'output_text': '\n\nThe president said that he wanted to
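For intuition, the refine strategy drafts an answer from the first chunk and then revisits that draft once per remaining chunk, which is why the intermediate steps above grow progressively more detailed. The following is illustrative pseudologic only, not the library's internal prompts, and reuses the `docs` and `query` defined earlier:

```python
# Illustrative sketch of "refine": draft an answer, then iteratively revise it.
llm = OpenAI(temperature=0)
answer = llm(f"Context:\n{docs[0].page_content}\n\nQuestion: {query}\nAnswer:")
for doc in docs[1:]:
    answer = llm(
        f"Existing answer: {answer}\n\n"
        f"New context:\n{doc.page_content}\n\n"
        f"Refine the existing answer to the question '{query}' only if the new context helps."
    )
```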
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,729
'\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'}Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.refine_prompt_template = ( "The original question is as follows: {question}\n" "We have provided an existing answer: {existing_answer}\n" "We have the opportunity to refine the existing answer" "(only if needed) with some more context below.\n" "------------\n" "{context_str}\n" "------------\n" "Given the new context, refine the original answer to better " "answer the question. " "If the context isn't useful, return the original answer. Reply in Italian.")refine_prompt = PromptTemplate( input_variables=["question", "existing_answer", "context_str"], template=refine_prompt_template,)initial_qa_template = ( "Context information is below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question: {question}\nYour answer should be in Italian.\n")initial_qa_prompt = PromptTemplate( input_variables=["context_str", "question"], template=initial_qa_template)chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True, question_prompt=initial_qa_prompt, refine_prompt=refine_prompt)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha reso omaggio al suo
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,730
servizio di questo paese e ha reso omaggio al suo servizio.', "\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione.", "\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei.", "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"], 'output_text': "\n\nIl presidente ha detto che Justice Breyer ha
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,731
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"}The map-rerank Chain​This sections shows results of using the map-rerank Chain to do question answering with sources.chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True)query = "What did the president say about Justice Breyer"results = chain({"input_documents": docs, "question": query}, return_only_outputs=True)results["output_text"] ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.'results["intermediate_steps"] [{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.', 'score': '100'}, {'answer': ' This document does not answer the question', 'score': '0'}, {'answer': ' This document does not answer the question', 'score': '0'}, {'answer': ' This document does not answer the question', 'score': '0'}]Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.from langchain.output_parsers import RegexParseroutput_parser = RegexParser( regex=r"(.*?)\nScore: (.*)", output_keys=["answer", "score"],)prompt_template = """Use the following pieces of context to answer the question
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
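The RegexParser defined just above is what splits each model completion into the answer used for the final output and the score used for reranking; for example:

```python
output_parser.parse("Il presidente ha ringraziato Justice Breyer.\nScore: 100")
# -> {'answer': 'Il presidente ha ringraziato Justice Breyer.', 'score': '100'}
```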
1,732
pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:Question: [question here]Helpful Answer In Italian: [answer here]Score: [score between 0 and 100]Begin!Context:---------{context}---------Question: {question}Helpful Answer In Italian:"""PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"], output_parser=output_parser,)chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True, prompt=PROMPT)query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.', 'score': '100'}, {'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.', 'score': '100'}, {'answer': ' Non so.', 'score': '0'}, {'answer': ' Non so.', 'score': '0'}], 'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.'}Document QA with sourcesWe can also perform document QA and return the sources that were used to answer the question. To do this we'll just need to make sure each document has a "source" key in the metadata, and we'll use the load_qa_with_sources helper to construct our chain:docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))])query = "What did the president say about Justice Breyer"docs = docsearch.similarity_search(query)from langchain.chains.qa_with_sources import load_qa_with_sources_chainchain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")query = "What did the president say about Justice
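Because the chain only looks for a "source" key in each Document's metadata, documents assembled by hand work just as well; a small sketch with made-up texts and source labels:

```python
from langchain.docstore.document import Document
from langchain.chains.qa_with_sources import load_qa_with_sources_chain

# "source" can be any identifier you want echoed back in the SOURCES line.
hand_built_docs = [
    Document(page_content="Justice Breyer was thanked for his service.", metadata={"source": "speech-p30"}),
    Document(page_content="The speech also covered infrastructure.", metadata={"source": "speech-p12"}),
]
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
chain({"input_documents": hand_built_docs, "question": "What was said about Justice Breyer?"},
      return_only_outputs=True)
```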
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,733
= "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.
1,734
Agent with retrieval tool | 🦜️🔗 Langchain
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.
1,735
Agent with retrieval toolThis is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.To start, we will set up the retriever we want to use, and then turn it into a retriever tool. Next, we will use the high level constructor for this type of agent. Finally, we will walk through how to construct a conversational retrieval agent from components.The RetrieverTo start, we need a retriever to use! The code here is mostly just example code. Feel free to use your own retriever and skip to the section on creating a retriever tool.from langchain.document_loaders import TextLoaderloader = TextLoader('../../../../../docs/docs/modules/state_of_the_union.txt')from langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import FAISSfrom langchain.embeddings import OpenAIEmbeddingsdocuments = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = FAISS.from_documents(texts, embeddings)retriever = db.as_retriever()Retriever ToolNow we need to create a tool for our retriever. The main things we need to pass in are a name for the retriever as well as a description. These will both be used by the language model, so they should
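By default as_retriever returns the top few most similar chunks; one common knob is search_kwargs, for example to pull back more context per query (the value here is just an illustration):

```python
# Ask FAISS for 6 chunks per query instead of the default 4.
retriever = db.as_retriever(search_kwargs={"k": 6})
retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson?")
```

A larger k gives the agent more context to work with at the cost of longer prompts and higher token usage.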
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.
1,736
be used by the language model, so they should be informative.from langchain.agents.agent_toolkits import create_retriever_tooltool = create_retriever_tool( retriever, "search_state_of_union", "Searches and returns documents regarding the state-of-the-union.")tools = [tool]Agent Constructor​Here, we will use the high-level create_conversational_retrieval_agent API to construct the agent.Notice that besides the list of tools, the only thing we need to pass in is a language model to use.
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation. ->: be used by the language model, so they should be informative.from langchain.agents.agent_toolkits import create_retriever_tooltool = create_retriever_tool( retriever, "search_state_of_union", "Searches and returns documents regarding the state-of-the-union.")tools = [tool]Agent Constructor​Here, we will use the high-level create_conversational_retrieval_agent API to construct the agent.Notice that besides the list of tools, the only thing we need to pass in is a language model to use.
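Pulled together, the retriever-tool and high-level constructor steps from this record look roughly like the sketch below. It assumes the retriever built earlier is in scope; the tool name and description simply mirror the notebook.

from langchain.agents.agent_toolkits import (
    create_retriever_tool,
    create_conversational_retrieval_agent,
)
from langchain.chat_models import ChatOpenAI

# The tool name and description are what the model uses to decide when to call it
tool = create_retriever_tool(
    retriever,
    "search_state_of_union",
    "Searches and returns documents regarding the state-of-the-union.",
)
tools = [tool]

# Besides the tools, the high-level constructor only needs a chat model
llm = ChatOpenAI(temperature=0)
agent_executor = create_conversational_retrieval_agent(llm, tools, verbose=True)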
1,737
Under the hood, this agent is using the OpenAIFunctionsAgent, so we need to use an ChatOpenAI model.from langchain.agents.agent_toolkits import create_conversational_retrieval_agentfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature = 0)agent_executor = create_conversational_retrieval_agent(llm, tools, verbose=True)We can now try it out!result = agent_executor({"input": "hi, im bob"}) > Entering new AgentExecutor chain... Hello Bob! How can I assist you today? > Finished chain.result["output"] 'Hello Bob! How can I assist you today?'Notice that it remembers your nameresult = agent_executor({"input": "whats my name?"}) > Entering new AgentExecutor chain... Your name is Bob. > Finished chain.result["output"] 'Your name is Bob.'Notice that it now does retrievalresult = agent_executor({"input": "what did the president say about kentaji brown jackson in the most recent state of the union?"}) > Entering new AgentExecutor chain... Invoking: `search_state_of_union` with `{'query': 'Kentaji Brown Jackson'}` [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source':
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation. ->: Under the hood, this agent is using the OpenAIFunctionsAgent, so we need to use an ChatOpenAI model.from langchain.agents.agent_toolkits import create_conversational_retrieval_agentfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature = 0)agent_executor = create_conversational_retrieval_agent(llm, tools, verbose=True)We can now try it out!result = agent_executor({"input": "hi, im bob"}) > Entering new AgentExecutor chain... Hello Bob! How can I assist you today? > Finished chain.result["output"] 'Hello Bob! How can I assist you today?'Notice that it remembers your nameresult = agent_executor({"input": "whats my name?"}) > Entering new AgentExecutor chain... Your name is Bob. > Finished chain.result["output"] 'Your name is Bob.'Notice that it now does retrievalresult = agent_executor({"input": "what did the president say about kentaji brown jackson in the most recent state of the union?"}) > Entering new AgentExecutor chain... Invoking: `search_state_of_union` with `{'query': 'Kentaji Brown Jackson'}` [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source':
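Invocation, as the transcript above shows, is a plain call with an "input" key; the greeting is just example input.

# Each call returns a dict; the final answer is under "output"
result = agent_executor({"input": "hi, im bob"})
print(result["output"])

# Follow-up turns reuse the conversation history the agent keeps internally
result = agent_executor({"input": "whats my name?"})
print(result["output"])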
1,738
legacy of excellence.', metadata={'source': '../../../../../docs/docs/modules/state_of_the_union.txt'}), Document(page_content='One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. \n\nWhen they came home, many of the world’s fittest and best trained warriors were never the same. \n\nHeadaches. Numbness. Dizziness. \n\nA cancer that would put them in a flag-draped coffin. \n\nI know. \n\nOne of those soldiers was my son Major Beau Biden. \n\nWe don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \n\nBut I’m committed to finding out everything we can. \n\nCommitted to military families like Danielle Robinson from Ohio. \n\nThe widow of Sergeant First Class Heath Robinson. \n\nHe was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n\nStationed near Baghdad, just yards from burn pits the size of football fields. \n\nHeath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter.', metadata={'source': '../../../../../docs/docs/modules/state_of_the_union.txt'}), Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation. ->: legacy of excellence.', metadata={'source': '../../../../../docs/docs/modules/state_of_the_union.txt'}), Document(page_content='One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. \n\nWhen they came home, many of the world’s fittest and best trained warriors were never the same. \n\nHeadaches. Numbness. Dizziness. \n\nA cancer that would put them in a flag-draped coffin. \n\nI know. \n\nOne of those soldiers was my son Major Beau Biden. \n\nWe don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \n\nBut I’m committed to finding out everything we can. \n\nCommitted to military families like Danielle Robinson from Ohio. \n\nThe widow of Sergeant First Class Heath Robinson. \n\nHe was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n\nStationed near Baghdad, just yards from burn pits the size of football fields. \n\nHeath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter.', metadata={'source': '../../../../../docs/docs/modules/state_of_the_union.txt'}), Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so
1,739
putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../../../docs/docs/modules/state_of_the_union.txt'}), Document(page_content='We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n\nI’ve worked on these issues a long time. \n\nI know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.', metadata={'source': '../../../../../docs/docs/modules/state_of_the_union.txt'})]In the most recent state of the union, the President mentioned Kentaji Brown Jackson. The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court. The President described Judge Ketanji Brown Jackson as one of our nation's top legal minds who will continue Justice Breyer's legacy of excellence. > Finished chain.result["output"] "In the most recent state of the union, the President mentioned Kentaji Brown Jackson. The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation. ->: putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../../../docs/docs/modules/state_of_the_union.txt'}), Document(page_content='We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n\nI’ve worked on these issues a long time. \n\nI know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.', metadata={'source': '../../../../../docs/docs/modules/state_of_the_union.txt'})]In the most recent state of the union, the President mentioned Kentaji Brown Jackson. The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court. The President described Judge Ketanji Brown Jackson as one of our nation's top legal minds who will continue Justice Breyer's legacy of excellence. > Finished chain.result["output"] "In the most recent state of the union, the President mentioned Kentaji Brown Jackson. The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United
1,740
Ketanji Brown Jackson to serve on the United States Supreme Court. The President described Judge Ketanji Brown Jackson as one of our nation's top legal minds who will continue Justice Breyer's legacy of excellence."Notice that the follow-up question asks about information previously retrieved, so no need to do another retrievalresult = agent_executor({"input": "how long ago did he nominate her?"}) > Entering new AgentExecutor chain... The President nominated Judge Ketanji Brown Jackson four days ago. > Finished chain.result["output"] 'The President nominated Judge Ketanji Brown Jackson four days ago.'Creating from components​What is actually going on underneath the hood? Let's take a look so we can understand how to modify going forward.There are a few components:The memoryThe prompt templateThe agentThe agent executor# This is needed for both the memory and the promptmemory_key = "history"The Memory​In this example, we want the agent to remember not only previous conversations, but also previous intermediate steps. For that, we can use AgentTokenBufferMemory. Note that if you want to change whether the agent remembers intermediate steps, or how long the buffer is, or anything like that, you should change this part.from langchain.agents.openai_functions_agent.agent_token_buffer_memory import AgentTokenBufferMemorymemory = AgentTokenBufferMemory(memory_key=memory_key, llm=llm)The Prompt Template​For the prompt template, we will use the OpenAIFunctionsAgent default way of creating one, but pass in a system prompt and a placeholder for memory.from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgentfrom langchain.schema.messages import SystemMessagefrom langchain.prompts import MessagesPlaceholdersystem_message = SystemMessage( content=( "Do your best to answer the questions. " "Feel free to use any tools available to look up " "relevant information, only if necessary"
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation. ->: Ketanji Brown Jackson to serve on the United States Supreme Court. The President described Judge Ketanji Brown Jackson as one of our nation's top legal minds who will continue Justice Breyer's legacy of excellence."Notice that the follow up question asks about information previously retrieved, so no need to do another retrievalresult = agent_executor({"input": "how long ago did he nominate her?"}) > Entering new AgentExecutor chain... The President nominated Judge Ketanji Brown Jackson four days ago. > Finished chain.result["output"] 'The President nominated Judge Ketanji Brown Jackson four days ago.'Creating from components‚ÄãWhat actually is going on underneath the hood? Let's take a look so we can understand how to modify going forward.There are a few components:The memoryThe prompt templateThe agentThe agent executor# This is needed for both the memory and the promptmemory_key = "history"The Memory‚ÄãIn this example, we want the agent to remember not only previous conversations, but also previous intermediate steps. For that, we can use AgentTokenBufferMemory. Note that if you want to change whether the agent remembers intermediate steps, or how the long the buffer is, or anything like that you should change this part.from langchain.agents.openai_functions_agent.agent_token_buffer_memory import AgentTokenBufferMemorymemory = AgentTokenBufferMemory(memory_key=memory_key, llm=llm)The Prompt Template‚ÄãFor the prompt template, we will use the OpenAIFunctionsAgent default way of creating one, but pass in a system prompt and a placeholder for memory.from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgentfrom langchain.schema.messages import SystemMessagefrom langchain.prompts import MessagesPlaceholdersystem_message = SystemMessage( content=( "Do your best to answer the questions. " "Feel free to use any tools available to look up " "relevant information, only if necessary"
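The memory and prompt-template pieces listed in this record can be sketched as follows, assuming llm is the ChatOpenAI instance created earlier.

from langchain.agents.openai_functions_agent.agent_token_buffer_memory import AgentTokenBufferMemory
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.schema.messages import SystemMessage
from langchain.prompts import MessagesPlaceholder

# The same key is shared by the memory and the prompt placeholder
memory_key = "history"

# Token-buffer memory keeps intermediate agent steps as well as chat turns
memory = AgentTokenBufferMemory(memory_key=memory_key, llm=llm)

# Default OpenAI-functions prompt, extended with a system message and a memory slot
system_message = SystemMessage(
    content=(
        "Do your best to answer the questions. "
        "Feel free to use any tools available to look up "
        "relevant information, only if necessary"
    )
)
prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=system_message,
    extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)],
)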
1,741
"relevant information, only if necessary" ))prompt = OpenAIFunctionsAgent.create_prompt( system_message=system_message, extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)] )The Agent​We will use the OpenAIFunctionsAgentagent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)The Agent Executor​Importantly, we pass in return_intermediate_steps=True since we are recording that with our memory objectfrom langchain.agents import AgentExecutoragent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True, return_intermediate_steps=True)result = agent_executor({"input": "hi, im bob"}) > Entering new AgentExecutor chain... Hello Bob! How can I assist you today? > Finished chain.result = agent_executor({"input": "whats my name"}) > Entering new AgentExecutor chain... Your name is Bob. > Finished chain.PreviousAnalyze a single long documentNextText splitting by headerThe RetrieverRetriever ToolAgent ConstructorCreating from componentsThe MemoryThe Prompt TemplateThe AgentThe Agent ExecutorCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.
This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation. ->: "relevant information, only if necessary" ))prompt = OpenAIFunctionsAgent.create_prompt( system_message=system_message, extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)] )The Agent​We will use the OpenAIFunctionsAgentagent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)The Agent Executor​Importantly, we pass in return_intermediate_steps=True since we are recording that with our memory objectfrom langchain.agents import AgentExecutoragent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True, return_intermediate_steps=True)result = agent_executor({"input": "hi, im bob"}) > Entering new AgentExecutor chain... Hello Bob! How can I assist you today? > Finished chain.result = agent_executor({"input": "whats my name"}) > Entering new AgentExecutor chain... Your name is Bob. > Finished chain.PreviousAnalyze a single long documentNextText splitting by headerThe RetrieverRetriever ToolAgent ConstructorCreating from componentsThe MemoryThe Prompt TemplateThe AgentThe Agent ExecutorCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
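Assembling the remaining components from this record, again assuming llm, tools, prompt, and memory are defined as above:

from langchain.agents import AgentExecutor
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent

# The agent picks the next action; the executor runs the tool-calling loop
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
    # Needed so AgentTokenBufferMemory can record intermediate steps
    return_intermediate_steps=True,
)
result = agent_executor({"input": "hi, im bob"})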
1,742
Analyze a single long document | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)Analyze a single long documentAnalyze a single long documentThe AnalyzeDocumentChain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.with open("../../state_of_the_union.txt") as f: state_of_the_union = f.read()from langchain.llms import OpenAIfrom langchain.chains import AnalyzeDocumentChainllm = OpenAI(temperature=0)from langchain.chains.question_answering import load_qa_chainqa_chain = load_qa_chain(llm, chain_type="map_reduce")qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?") ' The president thanked Justice Breyer for his service.'PreviousRemembering chat historyNextAgent with retrieval toolCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The AnalyzeDocumentChain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.
The AnalyzeDocumentChain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain. ->: Analyze a single long document | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)Analyze a single long documentAnalyze a single long documentThe AnalyzeDocumentChain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.with open("../../state_of_the_union.txt") as f: state_of_the_union = f.read()from langchain.llms import OpenAIfrom langchain.chains import AnalyzeDocumentChainllm = OpenAI(temperature=0)from langchain.chains.question_answering import load_qa_chainqa_chain = load_qa_chain(llm, chain_type="map_reduce")qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?") ' The president thanked Justice Breyer for his service.'PreviousRemembering chat historyNextAgent with retrieval toolCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
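As a compact restatement of this record, a sketch of AnalyzeDocumentChain over a single long string; it assumes state_of_the_union holds the file contents and that an OpenAI key is configured.

from langchain.llms import OpenAI
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.question_answering import load_qa_chain

llm = OpenAI(temperature=0)

# Map-reduce QA over the chunks AnalyzeDocumentChain produces internally
qa_chain = load_qa_chain(llm, chain_type="map_reduce")
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)

answer = qa_document_chain.run(
    input_document=state_of_the_union,
    question="what did the president say about justice breyer?",
)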
1,743
Chatbots | 🦜️🔗 Langchain
Open In Colab
Open In Colab ->: Chatbots | 🦜️🔗 Langchain
1,744
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingChatbotsOn this pageChatbotsUse case​Chatbots are one of the central LLM use-cases. The core features of chatbots are that they can have long-running conversations and have access to information that users want to know about.Aside from basic prompting and LLMs, memory and retrieval are the core components of a chatbot. Memory allows a chatbot to remember past interactions, and retrieval provides a chatbot with up-to-date, domain-specific information.Overview​The chat model interface is based around messages rather than raw text. Several components are important to consider for chat:chat model: See here for a list of chat model integrations and here for documentation on the chat model interface in LangChain. You can use LLMs (see here) for chatbots as well, but chat models have a more conversational tone and natively support a message interface.prompt template: Prompt templates make it easy to assemble prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.memory: See here for in-depth documentation on memory typesretriever (optional): See here for in-depth documentation on retrieval systems. These are useful if you want to build a chatbot with domain-specific knowledge.Quickstart​Here's a quick preview of how we can create chatbot interfaces. First let's install some dependencies and set the required credentials:pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()With a plain chat model, we can get chat completions by passing one or more messages to the model.The chat model will respond with a message.from langchain.schema import ( AIMessage,
Open In Colab
Open In Colab ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingChatbotsOn this pageChatbotsUse case​Chatbots are one of the central LLM use-cases. The core features of chatbots are that they can have long-running conversations and have access to information that users want to know about.Aside from basic prompting and LLMs, memory and retrieval are the core components of a chatbot. Memory allows a chatbot to remember past interactions, and retrieval provides a chatbot with up-to-date, domain-specific information.Overview​The chat model interface is based around messages rather than raw text. Several components are important to consider for chat:chat model: See here for a list of chat model integrations and here for documentation on the chat model interface in LangChain. You can use LLMs (see here) for chatbots as well, but chat models have a more conversational tone and natively support a message interface.prompt template: Prompt templates make it easy to assemble prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.memory: See here for in-depth documentation on memory typesretriever (optional): See here for in-depth documentation on retrieval systems. These are useful if you want to build a chatbot with domain-specific knowledge.Quickstart​Here's a quick preview of how we can create chatbot interfaces. First let's install some dependencies and set the required credentials:pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()With a plain chat model, we can get chat completions by passing one or more messages to the model.The chat model will respond with a message.from langchain.schema import ( AIMessage,
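The message-based interface described above boils down to passing a list of messages to the chat model; a minimal sketch, assuming OPENAI_API_KEY is set:

from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage

chat = ChatOpenAI()

# The system message sets behavior; the human message carries the user input
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming."),
]
response = chat(messages)  # returns an AIMessage
print(response.content)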
1,745
langchain.schema import ( AIMessage, HumanMessage, SystemMessage)from langchain.chat_models import ChatOpenAIchat = ChatOpenAI()chat([HumanMessage(content="Translate this sentence from English to French: I love programming.")]) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)And if we pass in a list of messages:messages = [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="I love programming.")]chat(messages) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)We can then wrap our chat model in a ConversationChain, which has built-in memory for remembering past user inputs and model outputs.from langchain.chains import ConversationChain conversation = ConversationChain(llm=chat) conversation.run("Translate this sentence from English to French: I love programming.") 'Je adore la programmation.'conversation.run("Translate it to German.") 'Ich liebe Programmieren.'Memory​As we mentioned above, the core component of chatbots is the memory system. One of the simplest and most commonly used forms of memory is ConversationBufferMemory:This memory allows for storing of messages in a bufferWhen called in a chain, it returns all of the messages it has storedLangChain comes with many other types of memory, too. See here for in-depth documentation on memory types.For now let's take a quick look at ConversationBufferMemory. We can manually add a few chat messages to the memory like so:from langchain.memory import ConversationBufferMemorymemory = ConversationBufferMemory()memory.chat_memory.add_user_message("hi!")memory.chat_memory.add_ai_message("whats up?")And now we can load from our memory. The key method exposed by all Memory classes is load_memory_variables. This takes in any initial chain input and returns a list of memory variables which are added to the chain input. Since this simple memory type doesn't actually
Open In Colab
Open In Colab ->: langchain.schema import ( AIMessage, HumanMessage, SystemMessage)from langchain.chat_models import ChatOpenAIchat = ChatOpenAI()chat([HumanMessage(content="Translate this sentence from English to French: I love programming.")]) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)And if we pass in a list of messages:messages = [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="I love programming.")]chat(messages) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)We can then wrap our chat model in a ConversationChain, which has built-in memory for remembering past user inputs and model outputs.from langchain.chains import ConversationChain conversation = ConversationChain(llm=chat) conversation.run("Translate this sentence from English to French: I love programming.") 'Je adore la programmation.'conversation.run("Translate it to German.") 'Ich liebe Programmieren.'Memory‚ÄãAs we mentioned above, the core component of chatbots is the memory system. One of the simplest and most commonly used forms of memory is ConversationBufferMemory:This memory allows for storing of messages in a bufferWhen called in a chain, it returns all of the messages it has storedLangChain comes with many other types of memory, too. See here for in-depth documentation on memory types.For now let's take a quick look at ConversationBufferMemory. We can manually add a few chat messages to the memory like so:from langchain.memory import ConversationBufferMemorymemory = ConversationBufferMemory()memory.chat_memory.add_user_message("hi!")memory.chat_memory.add_ai_message("whats up?")And now we can load from our memory. The key method exposed by all Memory classes is load_memory_variables. This takes in any initial chain input and returns a list of memory variables which are added to the chain input. Since this simple memory type doesn't actually
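ConversationBufferMemory, as used in this record, can be exercised on its own:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")

# This buffer type ignores the input dict, so an empty dict is fine here
print(memory.load_memory_variables({}))
# {'history': 'Human: hi!\nAI: whats up?'}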
1,746
Since this simple memory type doesn't actually take into account the chain input when loading memory, we can pass in an empty input for now:memory.load_memory_variables({}) {'history': 'Human: hi!\nAI: whats up?'}We can also keep a sliding window of the most recent k interactions using ConversationBufferWindowMemory.from langchain.memory import ConversationBufferWindowMemorymemory = ConversationBufferWindowMemory(k=1)memory.save_context({"input": "hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})memory.load_memory_variables({}) {'history': 'Human: not much you\nAI: not much'}ConversationSummaryMemory is an extension of this theme.It creates a summary of the conversation over time. This memory is most useful for longer conversations where the full message history would consume many tokens.from langchain.llms import OpenAIfrom langchain.memory import ConversationSummaryMemoryllm = OpenAI(temperature=0)memory = ConversationSummaryMemory(llm=llm)memory.save_context({"input": "hi"},{"output": "whats up"})memory.save_context({"input": "im working on better docs for chatbots"},{"output": "oh, that sounds like a lot of work"})memory.save_context({"input": "yes, but it's worth the effort"},{"output": "agreed, good docs are important!"})memory.load_memory_variables({}) {'history': '\nThe human greets the AI, to which the AI responds. The human then mentions they are working on better docs for chatbots, to which the AI responds that it sounds like a lot of work. The human agrees that it is worth the effort, and the AI agrees that good docs are important.'}ConversationSummaryBufferMemory extends this a bit further:It uses token length rather than number of interactions to determine when to flush interactions.from langchain.memory import ConversationSummaryBufferMemorymemory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)memory.save_context({"input": "hi"}, {"output": "whats
Open In Colab
Open In Colab ->: Since this simple memory type doesn't actually take into account the chain input when loading memory, we can pass in an empty input for now:memory.load_memory_variables({}) {'history': 'Human: hi!\nAI: whats up?'}We can also keep a sliding window of the most recent k interactions using ConversationBufferWindowMemory.from langchain.memory import ConversationBufferWindowMemorymemory = ConversationBufferWindowMemory(k=1)memory.save_context({"input": "hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})memory.load_memory_variables({}) {'history': 'Human: not much you\nAI: not much'}ConversationSummaryMemory is an extension of this theme.It creates a summary of the conversation over time. This memory is most useful for longer conversations where the full message history would consume many tokens.from langchain.llms import OpenAIfrom langchain.memory import ConversationSummaryMemoryllm = OpenAI(temperature=0)memory = ConversationSummaryMemory(llm=llm)memory.save_context({"input": "hi"},{"output": "whats up"})memory.save_context({"input": "im working on better docs for chatbots"},{"output": "oh, that sounds like a lot of work"})memory.save_context({"input": "yes, but it's worth the effort"},{"output": "agreed, good docs are important!"})memory.load_memory_variables({}) {'history': '\nThe human greets the AI, to which the AI responds. The human then mentions they are working on better docs for chatbots, to which the AI responds that it sounds like a lot of work. The human agrees that it is worth the effort, and the AI agrees that good docs are important.'}ConversationSummaryBufferMemory extends this a bit further:It uses token length rather than number of interactions to determine when to flush interactions.from langchain.memory import ConversationSummaryBufferMemorymemory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)memory.save_context({"input": "hi"}, {"output": "whats
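A sketch of the token-bounded variant mentioned above; max_token_limit=10 is deliberately tiny so summarization kicks in almost immediately, and llm is assumed to be an OpenAI LLM as in the record.

from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryBufferMemory

llm = OpenAI(temperature=0)

# Older turns are summarized once the buffer exceeds max_token_limit tokens
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
print(memory.load_memory_variables({}))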
1,747
"hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})Conversation‚ÄãWe can unpack what goes under the hood with ConversationChain. We can specify our memory, ConversationSummaryMemory and we can specify the prompt. from langchain.prompts import ( ChatPromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.chains import LLMChain# LLMllm = ChatOpenAI()# Prompt prompt = ChatPromptTemplate( messages=[ SystemMessagePromptTemplate.from_template( "You are a nice chatbot having a conversation with a human." ), # The `variable_name` here is what must align with memory MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{question}") ])# Notice that we `return_messages=True` to fit into the MessagesPlaceholder# Notice that `"chat_history"` aligns with the MessagesPlaceholder namememory = ConversationBufferMemory(memory_key="chat_history",return_messages=True)conversation = LLMChain( llm=llm, prompt=prompt, verbose=True, memory=memory)# Notice that we just pass in the `question` variables - `chat_history` gets populated by memoryconversation({"question": "hi"}) > Entering new LLMChain chain... Prompt after formatting: System: You are a nice chatbot having a conversation with a human. Human: hi > Finished chain. {'question': 'hi', 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False)], 'text': 'Hello! How can I assist you today?'}conversation({"question": "Translate this sentence from English to French: I love programming."}) > Entering new LLMChain chain... Prompt after formatting: System: You are a nice chatbot having a conversation with a human. Human: hi AI: Hello! How can I
Open In Colab
Open In Colab ->: "hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})Conversation‚ÄãWe can unpack what goes under the hood with ConversationChain. We can specify our memory, ConversationSummaryMemory and we can specify the prompt. from langchain.prompts import ( ChatPromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.chains import LLMChain# LLMllm = ChatOpenAI()# Prompt prompt = ChatPromptTemplate( messages=[ SystemMessagePromptTemplate.from_template( "You are a nice chatbot having a conversation with a human." ), # The `variable_name` here is what must align with memory MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{question}") ])# Notice that we `return_messages=True` to fit into the MessagesPlaceholder# Notice that `"chat_history"` aligns with the MessagesPlaceholder namememory = ConversationBufferMemory(memory_key="chat_history",return_messages=True)conversation = LLMChain( llm=llm, prompt=prompt, verbose=True, memory=memory)# Notice that we just pass in the `question` variables - `chat_history` gets populated by memoryconversation({"question": "hi"}) > Entering new LLMChain chain... Prompt after formatting: System: You are a nice chatbot having a conversation with a human. Human: hi > Finished chain. {'question': 'hi', 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False)], 'text': 'Hello! How can I assist you today?'}conversation({"question": "Translate this sentence from English to French: I love programming."}) > Entering new LLMChain chain... Prompt after formatting: System: You are a nice chatbot having a conversation with a human. Human: hi AI: Hello! How can I
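The from-scratch conversation chain in this record amounts to the following sketch (the same code as above, reformatted for readability).

from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

llm = ChatOpenAI()

# The placeholder's variable_name must match the memory_key below
prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "You are a nice chatbot having a conversation with a human."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

# return_messages=True keeps history as message objects for the placeholder
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory)

# Only "question" is passed in; "chat_history" is filled from memory
conversation({"question": "hi"})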
1,748
a human. Human: hi AI: Hello! How can I assist you today? Human: Translate this sentence from English to French: I love programming. > Finished chain. {'question': 'Translate this sentence from English to French: I love programming.', 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False), HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False), AIMessage(content='Sure! The translation of "I love programming" from English to French is "J\'adore programmer."', additional_kwargs={}, example=False)], 'text': 'Sure! The translation of "I love programming" from English to French is "J\'adore programmer."'}conversation({"question": "Now translate the sentence to German."}) > Entering new LLMChain chain... Prompt after formatting: System: You are a nice chatbot having a conversation with a human. Human: hi AI: Hello! How can I assist you today? Human: Translate this sentence from English to French: I love programming. AI: Sure! The translation of "I love programming" from English to French is "J'adore programmer." Human: Now translate the sentence to German. > Finished chain. {'question': 'Now translate the sentence to German.', 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False), HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False), AIMessage(content='Sure! The translation of "I love programming" from English to French is "J\'adore programmer."', additional_kwargs={}, example=False), HumanMessage(content='Now translate the sentence to German.', additional_kwargs={},
Open In Colab
Open In Colab ->: a human. Human: hi AI: Hello! How can I assist you today? Human: Translate this sentence from English to French: I love programming. > Finished chain. {'question': 'Translate this sentence from English to French: I love programming.', 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False), HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False), AIMessage(content='Sure! The translation of "I love programming" from English to French is "J\'adore programmer."', additional_kwargs={}, example=False)], 'text': 'Sure! The translation of "I love programming" from English to French is "J\'adore programmer."'}conversation({"question": "Now translate the sentence to German."}) > Entering new LLMChain chain... Prompt after formatting: System: You are a nice chatbot having a conversation with a human. Human: hi AI: Hello! How can I assist you today? Human: Translate this sentence from English to French: I love programming. AI: Sure! The translation of "I love programming" from English to French is "J'adore programmer." Human: Now translate the sentence to German. > Finished chain. {'question': 'Now translate the sentence to German.', 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False), HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False), AIMessage(content='Sure! The translation of "I love programming" from English to French is "J\'adore programmer."', additional_kwargs={}, example=False), HumanMessage(content='Now translate the sentence to German.', additional_kwargs={},
1,749
the sentence to German.', additional_kwargs={}, example=False), AIMessage(content='Certainly! The translation of "I love programming" from English to German is "Ich liebe das Programmieren."', additional_kwargs={}, example=False)], 'text': 'Certainly! The translation of "I love programming" from English to German is "Ich liebe das Programmieren."'}We can see the chat history preserved in the prompt using the LangSmith trace.Chat Retrieval​Now, suppose we want to chat with documents or some other source of knowledge.This is a popular use case, combining chat with document retrieval.It allows us to chat with specific information that the model was not trained on.pip install tiktoken chromadbLoad a blog post.from langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()Split and store this in a vector store.from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)all_splits = text_splitter.split_documents(data)from langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import Chromavectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())Create our memory, as before, but let's use ConversationSummaryMemory.memory = ConversationSummaryMemory(llm=llm,memory_key="chat_history",return_messages=True)from langchain.chat_models import ChatOpenAIfrom langchain.chains import ConversationalRetrievalChainllm = ChatOpenAI()retriever = vectorstore.as_retriever()qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)qa("How do agents use Task decomposition?") {'question': 'How do agents use Task decomposition?', 'chat_history': [SystemMessage(content='', additional_kwargs={})], 'answer': 'Agents can use task decomposition in several ways:\n\n1. Simple prompting: Agents can use Language Model based prompting to break
Open In Colab
Open In Colab ->: the sentence to German.', additional_kwargs={}, example=False), AIMessage(content='Certainly! The translation of "I love programming" from English to German is "Ich liebe das Programmieren."', additional_kwargs={}, example=False)], 'text': 'Certainly! The translation of "I love programming" from English to German is "Ich liebe das Programmieren."'}We can see the chat history preserved in the prompt using the LangSmith trace.Chat Retrieval‚ÄãNow, suppose we want to chat with documents or some other source of knowledge.This is popular use case, combining chat with document retrieval.It allows us to chat with specific information that the model was not trained on.pip install tiktoken chromadbLoad a blog post.from langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()Split and store this in a vector.from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)all_splits = text_splitter.split_documents(data)from langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import Chromavectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())Create our memory, as before, but's let's use ConversationSummaryMemory.memory = ConversationSummaryMemory(llm=llm,memory_key="chat_history",return_messages=True)from langchain.chat_models import ChatOpenAIfrom langchain.chains import ConversationalRetrievalChainllm = ChatOpenAI()retriever = vectorstore.as_retriever()qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)qa("How do agents use Task decomposition?") {'question': 'How do agents use Task decomposition?', 'chat_history': [SystemMessage(content='', additional_kwargs={})], 'answer': 'Agents can use task decomposition in several ways:\n\n1. Simple prompting: Agents can use Language Model based prompting to break
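End to end, the chat-retrieval example in this record amounts to the sketch below; it assumes OPENAI_API_KEY is set and that tiktoken and chromadb are installed.

from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryMemory
from langchain.chains import ConversationalRetrievalChain

# Index a blog post so the chatbot can ground its answers in it
data = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/").load()
splits = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0).split_documents(data)
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())

# Summary memory keeps the running conversation compact
llm = ChatOpenAI()
memory = ConversationSummaryMemory(llm=llm, memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(llm, retriever=vectorstore.as_retriever(), memory=memory)

qa("How do agents use Task decomposition?")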
1,750
can use Language Model based prompting to break down tasks into subgoals. For example, by providing prompts like "Steps for XYZ" or "What are the subgoals for achieving XYZ?", the agent can generate a sequence of smaller steps that lead to the completion of the overall task.\n\n2. Task-specific instructions: Agents can be given task-specific instructions to guide their planning process. For example, if the task is to write a novel, the agent can be instructed to "Write a story outline." This provides a high-level structure for the task and helps in breaking it down into smaller components.\n\n3. Human inputs: Agents can also take inputs from humans to decompose tasks. This can be done through direct communication or by leveraging human expertise. Humans can provide guidance and insights to help the agent break down complex tasks into manageable subgoals.\n\nOverall, task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.'}qa("What are the various ways to implement memory to support it?") {'question': 'What are the various ways to implement memory to support it?', 'chat_history': [SystemMessage(content='The human asks how agents use task decomposition. The AI explains that agents can use task decomposition in several ways, including simple prompting, task-specific instructions, and human inputs. Task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.', additional_kwargs={})], 'answer': 'There are several ways to implement memory to support task decomposition:\n\n1. Long-Term Memory Management: This involves storing and organizing information in a long-term memory system. The agent can retrieve past experiences, knowledge, and learned strategies to guide the task decomposition process.\n\n2. Internet Access: The agent can use internet access to search
Open In Colab
Open In Colab ->: can use Language Model based prompting to break down tasks into subgoals. For example, by providing prompts like "Steps for XYZ" or "What are the subgoals for achieving XYZ?", the agent can generate a sequence of smaller steps that lead to the completion of the overall task.\n\n2. Task-specific instructions: Agents can be given task-specific instructions to guide their planning process. For example, if the task is to write a novel, the agent can be instructed to "Write a story outline." This provides a high-level structure for the task and helps in breaking it down into smaller components.\n\n3. Human inputs: Agents can also take inputs from humans to decompose tasks. This can be done through direct communication or by leveraging human expertise. Humans can provide guidance and insights to help the agent break down complex tasks into manageable subgoals.\n\nOverall, task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.'}qa("What are the various ways to implement memory to support it?") {'question': 'What are the various ways to implement memory to support it?', 'chat_history': [SystemMessage(content='The human asks how agents use task decomposition. The AI explains that agents can use task decomposition in several ways, including simple prompting, task-specific instructions, and human inputs. Task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.', additional_kwargs={})], 'answer': 'There are several ways to implement memory to support task decomposition:\n\n1. Long-Term Memory Management: This involves storing and organizing information in a long-term memory system. The agent can retrieve past experiences, knowledge, and learned strategies to guide the task decomposition process.\n\n2. Internet Access: The agent can use internet access to search
1,751
The agent can use internet access to search for relevant information and gather resources to aid in task decomposition. This allows the agent to access a vast amount of information and utilize it in the decomposition process.\n\n3. GPT-3.5 Powered Agents: The agent can delegate simple tasks to GPT-3.5 powered agents. These agents can perform specific tasks or provide assistance in task decomposition, allowing the main agent to focus on higher-level planning and decision-making.\n\n4. File Output: The agent can store the results of task decomposition in files or documents. This allows for easy retrieval and reference during the execution of the task.\n\nThese memory resources help the agent in organizing and managing information, making informed decisions, and effectively decomposing complex tasks into smaller, manageable subgoals.'}Again, we can use the LangSmith trace to explore the prompt structure.Going deeper​Agents, such as the conversational retrieval agent, can be used for retrieval when necessary while also holding a conversation.PreviousInteracting with APIsNextExtractionUse caseOverviewQuickstartMemoryConversationChat RetrievalGoing deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Open In Colab
Open In Colab ->: The agent can use internet access to search for relevant information and gather resources to aid in task decomposition. This allows the agent to access a vast amount of information and utilize it in the decomposition process.\n\n3. GPT-3.5 Powered Agents: The agent can delegate simple tasks to GPT-3.5 powered agents. These agents can perform specific tasks or provide assistance in task decomposition, allowing the main agent to focus on higher-level planning and decision-making.\n\n4. File Output: The agent can store the results of task decomposition in files or documents. This allows for easy retrieval and reference during the execution of the task.\n\nThese memory resources help the agent in organizing and managing information, making informed decisions, and effectively decomposing complex tasks into smaller, manageable subgoals.'}Again, we can use the LangSmith trace to explore the prompt structure.Going deeper​Agents, such as the conversational retrieval agent, can be used for retrieval when necessary while also holding a conversation.PreviousInteracting with APIsNextExtractionUse caseOverviewQuickstartMemoryConversationChat RetrievalGoing deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,752
Retrieval-augmented generation (RAG) | 🦜️🔗 Langchain
Open In Colab
Open In Colab ->: Retrieval-augmented generation (RAG) | 🦜️🔗 Langchain
1,753
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)On this pageRetrieval-augmented generation (RAG)Use case​Suppose you have some text documents (PDF, blog, Notion pages, etc.) and want to ask questions related to the contents of those documents. LLMs, given their proficiency in understanding text, are a great tool for this.In this walkthrough we'll go over how to build a question-answering over documents application using LLMs. Two very related use cases which we cover elsewhere are:QA over structured data (e.g., SQL)QA over code (e.g., Python)Overview​The pipeline for converting raw unstructured data into a QA chain looks like this:Loading: First we need to load our data. Use the LangChain integration hub to browse the full set of loaders. Splitting: Text splitters break Documents into splits of specified sizeStorage: Storage (e.g., often a vectorstore) will house and often embed the splitsRetrieval: The app retrieves splits from storage (e.g., often with similar embeddings to the input question)Generation: An LLM produces an answer using a prompt that includes the question and the retrieved dataQuickstart​Suppose we want a QA app over this blog post. We can create this in a few lines of code. First set environment variables and install packages:pip install langchain openai chromadb langchainhub# Set env var OPENAI_API_KEY or load from a .env file# import dotenv#
# Load documents
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")

# Split documents
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
splits = text_splitter.split_documents(loader.load())

# Embed and store splits
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# Prompt: https://smith.langchain.com/hub/rlm/rag-prompt
from langchain import hub
rag_prompt = hub.pull("rlm/rag-prompt")

# LLM
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# RAG chain
from langchain.schema.runnable import RunnablePassthrough
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | rag_prompt
    | llm
)
rag_chain.invoke("What is Task Decomposition?")

    AIMessage(content='Task decomposition is the process of breaking down a task into smaller subgoals or steps. It can be done using simple prompting, task-specific instructions, or human inputs.')

Here is the LangSmith trace for this chain. Below we will explain each step in more detail.

Step 1. Load
Specify a DocumentLoader to load in your unstructured data as Documents. A Document is a dict with text (page_content) and metadata.

from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()

Go deeper
Browse the > 160 data loader integrations here.
See further documentation on loaders here.

Step 2. Split
Split the Document into chunks for embedding and vector storage.

from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)

Go deeper
DocumentSplitters are just one type of the more generic DocumentTransformers. See further documentation on transformers here. Context-aware splitters keep the location ("context") of each split in the original Document (a short sketch for the Markdown case follows the list below):
Markdown files
Code (py or js)
Documents
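For example, a context-aware splitter for Markdown can attach each split's enclosing headers as metadata. This is a minimal sketch under assumed inputs: the Markdown string and header labels below are illustrative and not taken from the notebook.

from langchain.text_splitter import MarkdownHeaderTextSplitter

md = "# Agents\n\nSome introductory text.\n\n## Planning\n\nNotes on task decomposition."
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
md_splits = markdown_splitter.split_text(md)
# Each split keeps the headers it sits under as metadata,
# e.g. {"Header 1": "Agents", "Header 2": "Planning"}, so downstream
# retrieval can filter or cite by section.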
Step 3. Store
To be able to look up our document splits, we first need to store them. The most common way to do this is to embed the contents of each document split and store the embedding and splits in a vectorstore.

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())

Go deeper
Browse the > 40 vectorstore integrations here.
See further documentation on vectorstores here.
Browse the > 30 text embedding integrations here.
See further documentation on embedding models here.

Step 4. Retrieve
Retrieve relevant splits for any question using similarity search. This is simply "top K" retrieval, where we select documents based on embedding similarity to the query.

question = "What are the approaches to Task Decomposition?"
docs = vectorstore.similarity_search(question)
len(docs)

    4

Go deeper
Vectorstores are commonly used for retrieval, but they are not the only option. For example, SVMs (see thread here) can also be used. LangChain has many retrievers including, but not limited to, vectorstores. All retrievers implement a common method get_relevant_documents() (and its asynchronous variant aget_relevant_documents()).

from langchain.retrievers import SVMRetriever
svm_retriever = SVMRetriever.from_documents(all_splits, OpenAIEmbeddings())
docs_svm = svm_retriever.get_relevant_documents(question)
len(docs_svm)

    4
Some common ways to improve on vector similarity search include:
MultiQueryRetriever generates variants of the input question to improve retrieval.
Max marginal relevance selects for relevance and diversity among the retrieved documents.
Documents can be filtered during retrieval using metadata filters (a short sketch of the last two follows the example below).

import logging
from langchain.chat_models import ChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever

logging.basicConfig()
logging.getLogger('langchain.retrievers.multi_query').setLevel(logging.INFO)
retriever_from_llm = MultiQueryRetriever.from_llm(retriever=vectorstore.as_retriever(), llm=ChatOpenAI(temperature=0))
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
len(unique_docs)
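To make the last two bullets concrete, here is a minimal sketch of max marginal relevance retrieval and of metadata filtering against the same Chroma vectorstore. The filter value is illustrative; it assumes the splits carry the loader's default "source" metadata (the source URL), which may differ for other loaders.

# Max marginal relevance: trade off similarity to the query against diversity among results
docs_mmr = vectorstore.max_marginal_relevance_search(question, k=4, fetch_k=20)

# The same behavior exposed through the retriever interface
mmr_retriever = vectorstore.as_retriever(search_type="mmr", search_kwargs={"k": 4})

# Metadata filtering: only consider splits whose metadata matches the filter
docs_filtered = vectorstore.similarity_search(
    question,
    k=4,
    filter={"source": "https://lilianweng.github.io/posts/2023-06-23-agent/"},
)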
In addition, a useful concept for improving retrieval is decoupling the documents from the embedded search key. For example, we can embed a document summary or a question that is likely to lead to the document being retrieved. See details here on the multi-vector retriever for this purpose.

Step 5. Generate
Distill the retrieved documents into an answer using an LLM/chat model (e.g., gpt-3.5-turbo). We use the Runnable protocol to define the chain; it pipes components together in a transparent way. We used a prompt for RAG that is checked into the LangChain prompt hub (here).

from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

from langchain.schema.runnable import RunnablePassthrough
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | rag_prompt
    | llm
)
rag_chain.invoke("What is Task Decomposition?")

    AIMessage(content='Task decomposition is the process of breaking down a task into smaller subgoals or steps. It can be done using simple prompting, task-specific instructions, or human inputs.')

Go deeper
Choosing LLMs
Browse the > 90 LLM and chat model integrations here.
See further documentation on LLMs and chat models here.
See a guide on local LLMs here.

Customizing the prompt
As shown above, we can load prompts (e.g., this RAG prompt) from the prompt hub. The prompt can also be easily customized, as shown below.

from langchain.prompts import PromptTemplate

template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.
{context}
Question: {question}
Helpful Answer:"""
rag_prompt_custom = PromptTemplate.from_template(template)

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | rag_prompt_custom
    | llm
)
rag_chain.invoke("What is Task Decomposition?")

    AIMessage(content='Task decomposition is the process of breaking down a complicated task into smaller, more manageable subtasks or steps. It can be done using prompts, task-specific instructions, or human inputs. Thanks for asking!')

We can use LangSmith to see the trace.
RAG over code | 🦜️🔗 Langchain
Use case
Source code analysis is one of the most popular LLM applications (e.g., GitHub Co-Pilot, Code Interpreter, Codium, and Codeium), for use cases such as:
Q&A over the code base to understand how it works
Using LLMs for suggesting refactors or improvements
Using LLMs for documenting the code

Overview
The pipeline for QA over code follows the steps we do for document question answering, with some differences. In particular, we can employ a splitting strategy that does a few things:
Keeps each top-level function and class in the code in its own document
Puts the remaining code into a separate document
Retains metadata about where each split comes from

Quickstart

pip install openai tiktoken chromadb langchain

# Set env var OPENAI_API_KEY or load from a .env file
# import dotenv
# dotenv.load_dotenv()

We'll follow the structure of this notebook and employ context-aware code splitting.

Loading
We will upload all python project files using the langchain.document_loaders.TextLoader. The following script iterates over the files in the LangChain repository and loads every .py file (a.k.a. documents):

# from git import Repo
from langchain.text_splitter import Language
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser
# Clone
repo_path = "/Users/rlm/Desktop/test_repo"
# repo = Repo.clone_from("https://github.com/langchain-ai/langchain", to_path=repo_path)

We load the py code using LanguageParser, which will:
Keep top-level functions and classes together (into a single document)
Put remaining code into a separate document
Retain metadata about where each split comes from

# Load
loader = GenericLoader.from_filesystem(
    repo_path + "/libs/langchain/langchain",
    glob="**/*",
    suffixes=[".py"],
    parser=LanguageParser(language=Language.PYTHON, parser_threshold=500),
)
documents = loader.load()
len(documents)

    1293

Splitting
Split the Document into chunks for embedding and vector storage. We can use RecursiveCharacterTextSplitter with the language specified.

from langchain.text_splitter import RecursiveCharacterTextSplitter
python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=2000, chunk_overlap=200
)
texts = python_splitter.split_documents(documents)
len(texts)

    3748

RetrievalQA
We need to store the documents in a way we can semantically search for their content. The most common approach is to embed the contents of each document, then store the embedding and document in a vector store. When setting up the vectorstore retriever:
We test max marginal relevance for retrieval
And return 8 documents

Go deeper
Browse the > 40 vectorstore integrations here.
See further documentation on vectorstores here.
Browse the > 30 text embedding integrations here.
See further documentation on embedding models here.

from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
db = Chroma.from_documents(texts, OpenAIEmbeddings(disallowed_special=()))
retriever = db.as_retriever(
    search_type="mmr",  # Also test "similarity"
    search_kwargs={"k": 8},
)
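Before wiring the retriever into a chain, it can help to sanity-check what it returns for a representative question. A minimal sketch; the question string is just an illustration, and any query about the code base works:

question = "How can I initialize a ReAct agent?"
relevant_docs = retriever.get_relevant_documents(question)
len(relevant_docs)         # up to 8 documents, re-ranked for diversity by MMR
relevant_docs[0].metadata  # each split retains its source file path in metadata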
Chat
Test chat, just as we do for chatbots.

Go deeper
Browse the > 55 LLM and chat model integrations here.
See further documentation on LLMs and chat models here.
Use local LLMs: the popularity of PrivateGPT and GPT4All underscores the importance of running LLMs locally.

from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryMemory
from langchain.chains import ConversationalRetrievalChain

llm = ChatOpenAI(model_name="gpt-4")
memory = ConversationSummaryMemory(llm=llm, memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)

question = "How can I initialize a ReAct agent?"
result = qa(question)
result['answer']

    'To initialize a ReAct agent, you need to follow these steps:\n\n1. Initialize a language model `llm` of type `BaseLanguageModel`.\n\n2. Initialize a document store `docstore` of type `Docstore`.\n\n3. Create a `DocstoreExplorer` with the initialized `docstore`. The `DocstoreExplorer` is used to search for and look up terms in the document store.\n\n4. Create an array of `Tool` objects. The `Tool` objects represent the actions that the agent can perform. In the case of `ReActDocstoreAgent`, the tools must be "Search" and "Lookup" with their corresponding functions from the `DocstoreExplorer`.\n\n5. Initialize the `ReActDocstoreAgent` using the `from_llm_and_tools` method with the `llm` (language model) and `tools` as parameters.\n\n6. Initialize the `ReActChain` (which is the `AgentExecutor`) using the `ReActDocstoreAgent` and `tools` as parameters.\n\nHere is an example of how to do this:\n\n```python\nfrom langchain.chains import ReActChain, OpenAI\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.tools.base import BaseTool\n\n# Initialize the LLM and a docstore\nllm = OpenAI()\ndocstore = Docstore()\n\ndocstore_explorer = DocstoreExplorer(docstore)\ntools = [\n    Tool(\n        name="Search",\n        func=docstore_explorer.search,\n        description="Search for a term in the docstore.",\n    ),\n    Tool(\n        name="Lookup",\n        func=docstore_explorer.lookup,\n        description="Lookup a term in the docstore.",\n    ),\n]\nagent = ReActDocstoreAgent.from_llm_and_tools(llm, tools)\nreact = ReActChain(agent=agent, tools=tools)\n```\n\nKeep in mind that this is a simplified example and you might need to adapt it to your specific needs.'
questions = [
    "What is the class hierarchy?",
    "What classes are derived from the Chain class?",
    "What one improvement do you propose in code in relation to the class hierarchy for the Chain class?",
]
for question in questions:
    result = qa(question)
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")

    -> **Question**: What is the class hierarchy?

    **Answer**: The class hierarchy in object-oriented programming is the structure that forms when classes are derived from other classes. The derived class is a subclass of the base class, also known as the superclass. This hierarchy is formed based on the concept of inheritance in object-oriented programming, where a subclass inherits the properties and functionalities of the superclass.

    In the given context, we have the following examples of class hierarchies:

    1. `BaseCallbackHandler --> <name>CallbackHandler` means `BaseCallbackHandler` is a base class and `<name>CallbackHandler` (like `AimCallbackHandler`, `ArgillaCallbackHandler` etc.) are derived classes that inherit from `BaseCallbackHandler`.

    2. `BaseLoader --> <name>Loader` means `BaseLoader` is a base class and `<name>Loader` (like `TextLoader`, `UnstructuredFileLoader` etc.) are derived classes that inherit from `BaseLoader`.

    3. `ToolMetaclass --> BaseTool --> <name>Tool` means `ToolMetaclass` is a base class, `BaseTool` is a derived class that inherits from `ToolMetaclass`, and `<name>Tool` (like `AIPluginTool`, `BaseGraphQLTool` etc.) are further derived classes that inherit from `BaseTool`.
    -> **Question**: What classes are derived from the Chain class?

    **Answer**: The classes that are derived from the Chain class are:

    1. LLMSummarizationCheckerChain
    2. MapReduceChain
    3. OpenAIModerationChain
    4. NatBotChain
    5. QAGenerationChain
    6. QAWithSourcesChain
    7. RetrievalQAWithSourcesChain
    8. VectorDBQAWithSourcesChain
    9. RetrievalQA
    10. VectorDBQA
    11. LLMRouterChain
    12. MultiPromptChain
    13. MultiRetrievalQAChain
    14. MultiRouteChain
    15. RouterChain
    16. SequentialChain
    17. SimpleSequentialChain
    18. TransformChain
    19. BaseConversationalRetrievalChain
    20. ConstitutionalChain

    -> **Question**: What one improvement do you propose in code in relation to the class hierarchy for the Chain class?

    **Answer**: As an AI model, I don't have personal opinions. However, one suggestion could be to improve the documentation of the Chain class hierarchy. The current comments and docstrings provide some details, but it could be helpful to include more explicit explanations about the hierarchy, the roles of each subclass, and their relationships with one another. Also, incorporating UML diagrams or other visuals could help developers better understand the structure and interactions of the classes.

We can look at the LangSmith trace to see what is happening under the hood:
In particular, the code is well structured and kept together in the retrieval output.
The retrieved code and chat history are passed to the LLM for answer distillation.

Open source LLMs
We can use Code LLaMA via the LlamaCpp or Ollama integration.
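The LlamaCpp route is shown below; as a lighter-weight alternative, the same model can be served through Ollama. A minimal sketch, assuming Ollama is running locally and a Code LLaMA model has already been pulled (the model tag is illustrative):

from langchain.llms import Ollama

# Assumes something like `ollama pull codellama` has been run beforehand
ollama_llm = Ollama(model="codellama")
ollama_llm("Write a Python function that reverses a string.")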
Note: be sure to upgrade llama-cpp-python in order to use the new gguf file format.

CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 /Users/rlm/miniforge3/envs/llama2/bin/pip install -U llama-cpp-python --no-cache-dir

Check out the latest code-llama models here.

from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.memory import ConversationSummaryMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
    model_path="/Users/rlm/Desktop/Code/llama/code-llama/codellama-13b-instruct.Q4_K_M.gguf",
    n_ctx=5000,
    n_gpu_layers=1,
    n_batch=512,
    f16_kv=True,  # MUST set to True, otherwise you will run into problems after a couple of calls
    callback_manager=callback_manager,
    verbose=True,
)

    llama_model_loader: loaded meta data with 17 key-value pairs and 363 tensors from /Users/rlm/Desktop/Code/llama/code-llama/codellama-13b-instruct.Q4_K_M.gguf (version GGUF V1 (latest))
    llama_model_loader: - tensor    0: token_embd.weight     q4_0  [  5120, 32016,     1,     1 ]
    llama_model_loader: - tensor    1: output_norm.weight    f32   [  5120,     1,     1,     1 ]
    llama_model_loader: - tensor    2: output.weight         f16   [  5120, 32016,     1,     1 ]
    llama_model_loader: - tensor    3: blk.0.attn_q.weight   q4_K  [  5120,  5120,     1,     1 ]
    ... (the per-tensor listing continues in the same pattern and is truncated here)
Open In Collab
Open In Collab ->: blk.16.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 150: blk.16.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 151: blk.16.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 152: blk.16.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 153: blk.16.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 154: blk.16.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 155: blk.16.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 156: blk.17.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 157: blk.17.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 158: blk.17.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 159: blk.17.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 160: blk.17.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 161: blk.17.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 162: blk.17.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 163: blk.17.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 164: blk.17.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 165: blk.18.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 166: blk.18.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 167:
1,774
1 ] llama_model_loader: - tensor 167: blk.18.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 168: blk.18.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 169: blk.18.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 170: blk.18.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 171: blk.18.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 172: blk.18.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 173: blk.18.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 174: blk.19.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 175: blk.19.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 176: blk.19.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 177: blk.19.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 178: blk.19.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 179: blk.19.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 180: blk.19.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 181: blk.19.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 182: blk.19.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 183: blk.20.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 184: blk.20.attn_k.weight q4_K [ 5120, 5120, 1,
Open In Collab
Open In Collab ->: 1 ] llama_model_loader: - tensor 167: blk.18.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 168: blk.18.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 169: blk.18.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 170: blk.18.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 171: blk.18.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 172: blk.18.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 173: blk.18.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 174: blk.19.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 175: blk.19.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 176: blk.19.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 177: blk.19.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 178: blk.19.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 179: blk.19.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 180: blk.19.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 181: blk.19.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 182: blk.19.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 183: blk.20.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 184: blk.20.attn_k.weight q4_K [ 5120, 5120, 1,
1,775
q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 185: blk.20.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 186: blk.20.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 187: blk.20.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 188: blk.20.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 189: blk.20.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 190: blk.20.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 191: blk.20.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 192: blk.21.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 193: blk.21.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 194: blk.21.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 195: blk.21.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 196: blk.21.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 197: blk.21.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 198: blk.21.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 199: blk.21.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 200: blk.21.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 201: blk.22.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 202: blk.22.attn_k.weight
Open In Collab
Open In Collab ->: q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 185: blk.20.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 186: blk.20.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 187: blk.20.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 188: blk.20.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 189: blk.20.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 190: blk.20.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 191: blk.20.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 192: blk.21.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 193: blk.21.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 194: blk.21.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 195: blk.21.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 196: blk.21.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 197: blk.21.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 198: blk.21.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 199: blk.21.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 200: blk.21.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 201: blk.22.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 202: blk.22.attn_k.weight
1,776
- tensor 202: blk.22.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 203: blk.22.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 204: blk.22.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 205: blk.22.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 206: blk.22.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 207: blk.22.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 208: blk.22.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 209: blk.22.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 210: blk.23.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 211: blk.23.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 212: blk.23.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 213: blk.23.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 214: blk.23.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 215: blk.23.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 216: blk.23.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 217: blk.23.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 218: blk.23.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 219: blk.24.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: -
Open In Collab
Open In Collab ->: - tensor 202: blk.22.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 203: blk.22.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 204: blk.22.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 205: blk.22.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 206: blk.22.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 207: blk.22.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 208: blk.22.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 209: blk.22.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 210: blk.23.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 211: blk.23.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 212: blk.23.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 213: blk.23.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 214: blk.23.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 215: blk.23.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 216: blk.23.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 217: blk.23.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 218: blk.23.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 219: blk.24.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: -
1,777
5120, 1, 1 ] llama_model_loader: - tensor 220: blk.24.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 221: blk.24.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 222: blk.24.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 223: blk.24.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 224: blk.24.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 225: blk.24.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 226: blk.24.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 227: blk.24.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 228: blk.25.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 229: blk.25.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 230: blk.25.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 231: blk.25.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 232: blk.25.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 233: blk.25.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 234: blk.25.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 235: blk.25.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 236: blk.25.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 237: blk.26.attn_q.weight q4_K [ 5120,
Open In Collab
Open In Collab ->: 5120, 1, 1 ] llama_model_loader: - tensor 220: blk.24.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 221: blk.24.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 222: blk.24.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 223: blk.24.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 224: blk.24.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 225: blk.24.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 226: blk.24.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 227: blk.24.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 228: blk.25.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 229: blk.25.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 230: blk.25.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 231: blk.25.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 232: blk.25.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 233: blk.25.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 234: blk.25.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 235: blk.25.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 236: blk.25.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 237: blk.26.attn_q.weight q4_K [ 5120,
1,778
blk.26.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 238: blk.26.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 239: blk.26.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 240: blk.26.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 241: blk.26.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 242: blk.26.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 243: blk.26.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 244: blk.26.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 245: blk.26.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 246: blk.27.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 247: blk.27.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 248: blk.27.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 249: blk.27.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 250: blk.27.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 251: blk.27.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 252: blk.27.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 253: blk.27.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 254: blk.27.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 255:
Open In Collab
Open In Collab ->: blk.26.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 238: blk.26.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 239: blk.26.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 240: blk.26.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 241: blk.26.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 242: blk.26.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 243: blk.26.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 244: blk.26.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 245: blk.26.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 246: blk.27.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 247: blk.27.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 248: blk.27.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 249: blk.27.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 250: blk.27.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 251: blk.27.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 252: blk.27.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 253: blk.27.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 254: blk.27.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 255:
1,779
1 ] llama_model_loader: - tensor 255: blk.28.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 256: blk.28.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 257: blk.28.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 258: blk.28.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 259: blk.28.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 260: blk.28.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 261: blk.28.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 262: blk.28.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 263: blk.28.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 264: blk.29.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 265: blk.29.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 266: blk.29.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 267: blk.29.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 268: blk.29.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 269: blk.29.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 270: blk.29.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 271: blk.29.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 272: blk.29.ffn_norm.weight f32 [ 5120, 1, 1,
Open In Collab
Open In Collab ->: 1 ] llama_model_loader: - tensor 255: blk.28.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 256: blk.28.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 257: blk.28.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 258: blk.28.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 259: blk.28.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 260: blk.28.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 261: blk.28.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 262: blk.28.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 263: blk.28.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 264: blk.29.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 265: blk.29.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 266: blk.29.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 267: blk.29.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 268: blk.29.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 269: blk.29.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 270: blk.29.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 271: blk.29.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 272: blk.29.ffn_norm.weight f32 [ 5120, 1, 1,
1,780
f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 273: blk.30.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 274: blk.30.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 275: blk.30.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 276: blk.30.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 277: blk.30.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 278: blk.30.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 279: blk.30.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 280: blk.30.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 281: blk.30.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 282: blk.31.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 283: blk.31.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 284: blk.31.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 285: blk.31.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 286: blk.31.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 287: blk.31.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 288: blk.31.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 289: blk.31.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 290: blk.31.ffn_norm.weight
Open In Collab
Open In Collab ->: f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 273: blk.30.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 274: blk.30.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 275: blk.30.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 276: blk.30.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 277: blk.30.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 278: blk.30.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 279: blk.30.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 280: blk.30.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 281: blk.30.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 282: blk.31.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 283: blk.31.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 284: blk.31.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 285: blk.31.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 286: blk.31.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 287: blk.31.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 288: blk.31.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 289: blk.31.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 290: blk.31.ffn_norm.weight
1,781
- tensor 290: blk.31.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 291: blk.32.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 292: blk.32.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 293: blk.32.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 294: blk.32.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 295: blk.32.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 296: blk.32.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 297: blk.32.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 298: blk.32.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 299: blk.32.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 300: blk.33.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 301: blk.33.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 302: blk.33.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 303: blk.33.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 304: blk.33.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 305: blk.33.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 306: blk.33.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 307: blk.33.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: -
Open In Collab
Open In Collab ->: - tensor 290: blk.31.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 291: blk.32.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 292: blk.32.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 293: blk.32.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 294: blk.32.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 295: blk.32.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 296: blk.32.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 297: blk.32.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 298: blk.32.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 299: blk.32.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 300: blk.33.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 301: blk.33.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 302: blk.33.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 303: blk.33.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 304: blk.33.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 305: blk.33.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 306: blk.33.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 307: blk.33.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: -
1,782
1, 1, 1 ] llama_model_loader: - tensor 308: blk.33.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 309: blk.34.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 310: blk.34.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 311: blk.34.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 312: blk.34.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 313: blk.34.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 314: blk.34.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 315: blk.34.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 316: blk.34.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 317: blk.34.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 318: blk.35.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 319: blk.35.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 320: blk.35.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 321: blk.35.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 322: blk.35.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 323: blk.35.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 324: blk.35.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 325: blk.35.attn_norm.weight f32 [ 5120,
Open In Collab
Open In Collab ->: 1, 1, 1 ] llama_model_loader: - tensor 308: blk.33.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 309: blk.34.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 310: blk.34.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 311: blk.34.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 312: blk.34.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 313: blk.34.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 314: blk.34.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 315: blk.34.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 316: blk.34.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 317: blk.34.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 318: blk.35.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 319: blk.35.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 320: blk.35.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 321: blk.35.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 322: blk.35.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 323: blk.35.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 324: blk.35.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 325: blk.35.attn_norm.weight f32 [ 5120,
1,783
blk.35.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 326: blk.35.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 327: blk.36.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 328: blk.36.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 329: blk.36.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 330: blk.36.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 331: blk.36.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 332: blk.36.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 333: blk.36.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 334: blk.36.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 335: blk.36.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 336: blk.37.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 337: blk.37.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 338: blk.37.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 339: blk.37.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 340: blk.37.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 341: blk.37.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 342: blk.37.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 343:
Open In Collab
Open In Collab ->: blk.35.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 326: blk.35.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 327: blk.36.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 328: blk.36.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 329: blk.36.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 330: blk.36.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 331: blk.36.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 332: blk.36.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 333: blk.36.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 334: blk.36.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 335: blk.36.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 336: blk.37.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 337: blk.37.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 338: blk.37.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 339: blk.37.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 340: blk.37.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 341: blk.37.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 342: blk.37.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 343:
1,784
1 ] llama_model_loader: - tensor 343: blk.37.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 344: blk.37.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 345: blk.38.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 346: blk.38.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 347: blk.38.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 348: blk.38.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 349: blk.38.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 350: blk.38.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 351: blk.38.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 352: blk.38.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 353: blk.38.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 354: blk.39.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 355: blk.39.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 356: blk.39.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 357: blk.39.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 358: blk.39.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 359: blk.39.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 360: blk.39.ffn_up.weight q4_K [ 5120, 13824, 1,
Open In Collab
Open In Collab ->: 1 ] llama_model_loader: - tensor 343: blk.37.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 344: blk.37.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 345: blk.38.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 346: blk.38.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 347: blk.38.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 348: blk.38.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 349: blk.38.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 350: blk.38.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 351: blk.38.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 352: blk.38.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 353: blk.38.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 354: blk.39.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 355: blk.39.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 356: blk.39.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 357: blk.39.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 358: blk.39.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 359: blk.39.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 360: blk.39.ffn_up.weight q4_K [ 5120, 13824, 1,
1,785
q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 361: blk.39.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 362: blk.39.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - kv 0: general.architecture str llama_model_loader: - kv 1: general.name str llama_model_loader: - kv 2: llama.context_length u32 llama_model_loader: - kv 3: llama.embedding_length u32 llama_model_loader: - kv 4: llama.block_count u32 llama_model_loader: - kv 5: llama.feed_forward_length u32 llama_model_loader: - kv 6: llama.rope.dimension_count u32 llama_model_loader: - kv 7: llama.attention.head_count u32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 llama_model_loader: - kv 10: llama.rope.freq_base f32 llama_model_loader: - kv 11: general.file_type u32 llama_model_loader: - kv 12: tokenizer.ggml.model str llama_model_loader: - kv 13: tokenizer.ggml.tokens arr llama_model_loader: - kv 14: tokenizer.ggml.scores arr llama_model_loader: - kv 15: tokenizer.ggml.token_type arr llama_model_loader: - kv 16: general.quantization_version u32 llama_model_loader: - type f32: 81 tensors llama_model_loader: - type f16: 1 tensors llama_model_loader: - type q4_0: 1 tensors llama_model_loader: - type q4_K: 240 tensors llama_model_loader: - type q6_K: 40 tensors llm_load_print_meta: format = GGUF
Open In Collab
Open In Collab ->: q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 361: blk.39.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 362: blk.39.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - kv 0: general.architecture str llama_model_loader: - kv 1: general.name str llama_model_loader: - kv 2: llama.context_length u32 llama_model_loader: - kv 3: llama.embedding_length u32 llama_model_loader: - kv 4: llama.block_count u32 llama_model_loader: - kv 5: llama.feed_forward_length u32 llama_model_loader: - kv 6: llama.rope.dimension_count u32 llama_model_loader: - kv 7: llama.attention.head_count u32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 llama_model_loader: - kv 10: llama.rope.freq_base f32 llama_model_loader: - kv 11: general.file_type u32 llama_model_loader: - kv 12: tokenizer.ggml.model str llama_model_loader: - kv 13: tokenizer.ggml.tokens arr llama_model_loader: - kv 14: tokenizer.ggml.scores arr llama_model_loader: - kv 15: tokenizer.ggml.token_type arr llama_model_loader: - kv 16: general.quantization_version u32 llama_model_loader: - type f32: 81 tensors llama_model_loader: - type f16: 1 tensors llama_model_loader: - type q4_0: 1 tensors llama_model_loader: - type q4_K: 240 tensors llama_model_loader: - type q6_K: 40 tensors llm_load_print_meta: format = GGUF
1,786
llm_load_print_meta: format = GGUF V1 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32016 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 16384 llm_load_print_meta: n_ctx = 5000 llm_load_print_meta: n_embd = 5120 llm_load_print_meta: n_head = 40 llm_load_print_meta: n_head_kv = 40 llm_load_print_meta: n_layer = 40 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: f_norm_eps = 1.0e-05 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: n_ff = 13824 llm_load_print_meta: freq_base = 1000000.0 llm_load_print_meta: freq_scale = 1 llm_load_print_meta: model type = 13B llm_load_print_meta: model ftype = mostly Q4_K - Medium llm_load_print_meta: model size = 13.02 B llm_load_print_meta: general.name = LLaMA llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.11 MB llm_load_tensors: mem required = 7685.49 MB (+ 3906.25 MB per state) llama_new_context_with_model: kv self size = 3906.25 MB
ggml_metal_init: allocating ggml_metal_init: loading '/Users/rlm/miniforge3/envs/llama2/lib/python3.9/site-packages/llama_cpp/ggml-metal.metal' ggml_metal_init: loaded Metal kernels kernel_add through kernel_cpy_f16_f16 (elementwise, softmax, get_rows, norm, mul_mat and mul_mm variants for f16/f32 and the q2/q3/q4/q5/q6/q8 quantizations, plus rope, alibi and copy kernels; th_width = 32, th_max between 576 and 1024; full kernel listing omitted here)
ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: maxTransferRate = built-in GPU llama_new_context_with_model: compute buffer total size = 442.03 MB llama_new_context_with_model: max tensor size = 312.66 MB
ggml_metal_add_buffer: allocated 'data ' buffer, size = 7686.00 MB, (20243.77 / 21845.34) ggml_metal_add_buffer: allocated 'eval ' buffer, size = 1.42 MB, (20245.19 / 21845.34) ggml_metal_add_buffer: allocated 'kv ' buffer, size = 3908.25 MB, (24153.44 / 21845.34), warning: current allocated size is greater than the recommended max working set size AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | ggml_metal_add_buffer: allocated 'alloc ' buffer, size = 440.64 MB, (24594.08 / 21845.34), warning: current allocated size is greater than the recommended max working set size
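The output above is what llama.cpp prints while the model is loaded and the Metal backend is initialized. As a rough sketch of the kind of setup that produces it (the model path, n_gpu_layers and n_batch below are illustrative assumptions; only n_ctx=5000 matches the log), a LlamaCpp instance could be created like this:
```python
# Sketch only: the path and GPU/batch settings are assumptions, not taken from this notebook.
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = LlamaCpp(
    model_path="/path/to/codellama-13b.Q4_K_M.gguf",  # hypothetical local GGUF path
    n_gpu_layers=1,      # assumed: offload layers to Metal on Apple Silicon
    n_batch=512,         # assumed batch size
    n_ctx=5000,          # matches n_ctx = 5000 in the log above
    f16_kv=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=True,
)
```
With verbose=True and a streaming callback handler, the generated tokens and the llama_print_timings blocks below are printed as each call runs.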
llm("Question: In bash, how do I list all the text files in the current directory that have been modified in the last month? Answer:")
Llama.generate: prefix-match hit
 You can use the find command with a few options to this task. Here is an example of how you might go about it:
find . -type f -mtime +28 -exec ls {} \;
This command only for plain files (not), and limits the search to files that were more than 28 days ago, then the "ls" command on each file found. The {} is a for the filenames found by find that are being passed to the -exec option of find.
You can also use find in with other unix utilities like sort and grep to the list of files before they are:
find . -type f -mtime +28 | sort | grep pattern
This will find all plain files that match a given pattern, then sort the listically and filter it for only the matches.
Answer: `find` is pretty with its search. The should work as well:
\begin{code}
ls -l $(find . -mtime +28)
\end{code}
(It's a bad idea to parse output from `ls`, though, as you may
llama_print_timings: load time = 1074.43 ms
llama_print_timings: sample time = 180.71 ms / 256 runs ( 0.71 ms per token, 1416.67 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 9593.04 ms / 256 runs ( 37.47 ms per token, 26.69 tokens per second)
llama_print_timings: total time = 10139.91 ms
' You can use the find command with a few options to this task. Here is an example of how you might go about it:\n\nfind . -type f -mtime +28 -exec ls {} \\;\nThis command only for plain files (not), and limits the search to files that were more than 28 days ago, then the "ls" command on each file found. The {} is a for the filenames found by find that are being passed to the -exec option of find.\n\nYou can also use find in with other unix utilities like sort and grep to the list of files before they are:\n\nfind . -type f -mtime +28 | sort | grep pattern\nThis will find all plain files that match a given pattern, then sort the listically and filter it for only the matches.\n\nAnswer: `find` is pretty with its search. The should work as well:\n\n\\begin{code}\nls -l $(find . -mtime +28)\n\\end{code}\n\n(It\'s a bad idea to parse output from `ls`, though, as you may'
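For reference, a find invocation that matches the question as asked is find . -type f -name "*.txt" -mtime -30, which lists plain .txt files modified within the last 30 days; the model's -mtime +28 instead selects files modified more than 28 days ago, so it answers a slightly different question.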
grep pattern\nThis will find all plain files that match a given pattern, then sort the list alphabetically and filter it for only the matches.\n\nAnswer: `find` is pretty flexible with its search. The following should work as well:\n\n\\begin{code}\nls -l $(find . -mtime +28)\n\\end{code}\n\n(It\'s a bad idea to parse output from `ls`, though, as you may'from langchain.chains.question_answering import load_qa_chain# Prompttemplate = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Use three sentences maximum and keep the answer as concise as possible. {context}Question: {question}Helpful Answer:"""QA_CHAIN_PROMPT = PromptTemplate( input_variables=["context", "question"], template=template,)We can also use the LangChain Prompt Hub to store and fetch prompts.This will work with your LangSmith API key.Let's try with a default RAG prompt, here.from langchain import hubQA_CHAIN_PROMPT = hub.pull("rlm/rag-prompt-default")# Docsquestion = "How can I initialize a ReAct agent?"docs = retriever.get_relevant_documents(question)# Chainchain = load_qa_chain(llm, chain_type="stuff", prompt=QA_CHAIN_PROMPT)# Runchain({"input_documents": docs, "question": question}, return_only_outputs=True) Llama.generate: prefix-match hit You can use the `ReActAgent` class and pass it the desired tools as arguments; for example, you would do like this to create an agent with the `Lookup` and `Search` tools: ```python from langchain.agents.react import ReActAgent from langchain.tools.lookup import Lookup from langchain.tools.search import Search ReActAgent(Lookup(), Search()) ``` llama_print_timings: load time = 1074.43 ms llama_print_timings: sample time = 65.46 ms / 94 runs ( 0.70 ms per token, 1435.95 tokens per second) llama_print_timings: prompt eval time = 15975.57 ms / 1408 tokens ( 11.35 ms per token, 88.13 tokens per second)
Open In Collab
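The manual retrieve-then-stuff pattern above can also be driven through LangChain's higher-level RetrievalQA helper, which wires the retriever in for you. A minimal sketch, assuming the llm, retriever, and QA_CHAIN_PROMPT objects defined in the cells above:

```python
from langchain.chains import RetrievalQA

# Same "stuff" strategy as load_qa_chain, but retrieval happens inside the chain.
qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=retriever,
    chain_type="stuff",
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},
)
result = qa_chain({"query": "How can I initialize a ReAct agent?"})
print(result["result"])
```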
1,792
ms per token, 88.13 tokens per second) llama_print_timings: eval time = 4772.57 ms / 93 runs ( 51.32 ms per token, 19.49 tokens per second) llama_print_timings: total time = 20959.57 ms {'output_text': ' You can use the `ReActAgent` class and pass it the desired tools as arguments; for example, you would do like this to create an agent with the `Lookup` and `Search` tools:\n```python\nfrom langchain.agents.react import ReActAgent\nfrom langchain.tools.lookup import Lookup\nfrom langchain.tools.search import Search\nReActAgent(Lookup(), Search())\n```'}Here's the trace for this RAG run, showing the retrieved docs.PreviousRetrieval-augmented generation (RAG)NextUsing a RetrieverUse caseOverviewQuickstartLoadingSplittingRetrievalQAChatOpen source LLMsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Open In Collab
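One caveat on the generated answer above: the imports it suggests (langchain.agents.react.ReActAgent, langchain.tools.lookup, langchain.tools.search) do not appear to be part of LangChain's public API, so that snippet is unlikely to run as written. At the time of writing, the documented way to stand up a ReAct-style agent looked roughly like the sketch below (the tool choice here is arbitrary):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

# ZERO_SHOT_REACT_DESCRIPTION is the classic ReAct-style agent type.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 2 raised to the 10th power?")
```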
1,793
Graph querying | 🦜️🔗 Langchain
Graph databases give us a powerful way to represent and query real-world relationships. There are a number of chains that make it easy to use LLMs to interact with various graph DBs.
1,794
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingDiffbot Graph TransformerArangoDB QA chainNeo4j DB QA chainFalkorDBQAChainHugeGraph QA ChainKuzuQAChainMemgraph QA chainNebulaGraphQAChainNetworkX Graph QAGraphSparqlQAChainNeptune Open Cypher QA ChainGraph queryingGraph queryingGraph databases give us a powerful way to represent and query real-world relationships. There are a number of chains that make it easy to use LLMs to interact with various graph DBs.📄️ Diffbot Graph TransformerOpen In Colab📄️ ArangoDB QA chainOpen In Colab📄️ Neo4j DB QA chainThis notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.📄️ FalkorDBQAChainThis notebook shows how to use LLMs to provide a natural language interface to FalkorDB database.📄️ HugeGraph QA ChainThis notebook shows how to use LLMs to provide a natural language interface to HugeGraph database.📄️ KuzuQAChainThis notebook shows how to use LLMs to provide a natural language interface to Kùzu database.📄️ Memgraph QA chainThis notebook shows how to use LLMs to provide a natural language interface to a Memgraph database. To complete this tutorial, you will need Docker and Python 3.x installed.📄️ NebulaGraphQAChainThis notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database.📄️ NetworkX Graph QAThis notebook goes over how to do question answering over a graph data structure.📄️ GraphSparqlQAChainGraph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query
Graph databases give us a powerful way to represent and query real-world relationships. There are a number of chains that make it easy to use LLMs to interact with various graph DBs.
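All of the chains listed above follow the same shape: wrap the database in a graph object, hand it to a QA chain together with an LLM, and ask questions in natural language. A minimal sketch of that shared pattern using the Neo4j-backed GraphCypherQAChain, assuming a locally running Neo4j instance with placeholder credentials:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphCypherQAChain
from langchain.graphs import Neo4jGraph

# Placeholder connection details; point these at your own Neo4j instance.
graph = Neo4jGraph(
    url="bolt://localhost:7687", username="neo4j", password="password"
)

chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
chain.run("Who acted in The Godfather II?")
```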
1,795
cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.📄️ Neptune Open Cypher QA ChainThis QA chain queries Neptune graph database using openCypher and returns human readable responsePreviousSynthetic data generationNextDiffbot Graph TransformerCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Graph databases give us a powerful way to represent and query real-world relationships. There are a number of chains that make it easy to use LLMs to interact with various graph DBs.
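The SPARQL variant mentioned above works the same way, just with an RDF graph wrapper instead of a property-graph one. A sketch, assuming a publicly reachable RDF document as the data source (the FOAF card used in the GraphSparqlQAChain notebook):

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphSparqlQAChain
from langchain.graphs import RdfGraph

# Any reachable RDF source works; local_copy is where updates would be written.
graph = RdfGraph(
    source_file="http://www.w3.org/People/Berners-Lee/card",
    standard="rdf",
    local_copy="graph.ttl",
)

chain = GraphSparqlQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
chain.run("What is Tim Berners-Lee's work homepage?")
```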
1,796
NebulaGraphQAChain | 🦜️🔗 Langchain
This notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database.
1,797
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingDiffbot Graph TransformerArangoDB QA chainNeo4j DB QA chainFalkorDBQAChainHugeGraph QA ChainKuzuQAChainMemgraph QA chainNebulaGraphQAChainNetworkX Graph QAGraphSparqlQAChainNeptune Open Cypher QA ChainGraph queryingNebulaGraphQAChainOn this pageNebulaGraphQAChainThis notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database.You will need to have a running NebulaGraph cluster, for which you can run a containerized cluster by running the following script:curl -fsSL nebula-up.siwei.io/install.sh | bashOther options are:Install as a Docker Desktop Extension. See hereNebulaGraph Cloud Service. See hereDeploy from package, source code, or via Kubernetes. See hereOnce the cluster is running, we could create the SPACE and SCHEMA for the database.# connect ngql jupyter extension to nebulagraph# create a new space%ngql CREATE SPACE IF NOT EXISTS langchain(partition_num=1, replica_factor=1, vid_type=fixed_string(128));# Wait for a few seconds for the space to be created.%ngql USE langchain;Create the schema, for full dataset, refer here.CREATE TAG IF NOT EXISTS movie(name string);CREATE TAG IF NOT EXISTS person(name string, birthdate string);CREATE EDGE IF NOT EXISTS acted_in();CREATE TAG INDEX IF NOT EXISTS person_index ON person(name(128));CREATE TAG INDEX IF NOT EXISTS movie_index ON movie(name(128));Wait for schema creation to complete, then we can insert some data.INSERT VERTEX person(name, birthdate) VALUES "Al Pacino":("Al Pacino", "1940-04-25");INSERT VERTEX movie(name) VALUES "The Godfather II":("The Godfather II");INSERT VERTEX movie(name) VALUES "The Godfather Coda: The Death of Michael Corleone":("The Godfather Coda: The Death of Michael
This notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database.
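Note that the %ngql / %%ngql cell magics used above are not built into Jupyter; they come from the ipython-ngql package, which is presumably why a bare environment reports `Cell magic %%ngql not found` in the next cell's output. A sketch of the setup step, assuming the default credentials created by the install script:

```python
# Run once in the notebook before any %ngql cells (assumed ipython-ngql usage).
%pip install ipython-ngql
%load_ext ngql
%ngql --address 127.0.0.1 --port 9669 --user root --password nebula
```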
1,798
Godfather Coda: The Death of Michael Corleone");INSERT EDGE acted_in() VALUES "Al Pacino"->"The Godfather II":();INSERT EDGE acted_in() VALUES "Al Pacino"->"The Godfather Coda: The Death of Michael Corleone":(); UsageError: Cell magic `%%ngql` not found.from langchain.chat_models import ChatOpenAIfrom langchain.chains import NebulaGraphQAChainfrom langchain.graphs import NebulaGraphgraph = NebulaGraph( space="langchain", username="root", password="nebula", address="127.0.0.1", port=9669, session_pool_size=30,)Refresh graph schema information If the schema of the database changes, you can refresh the schema information needed to generate nGQL statements.# graph.refresh_schema()print(graph.get_schema) Node properties: [{'tag': 'movie', 'properties': [('name', 'string')]}, {'tag': 'person', 'properties': [('name', 'string'), ('birthdate', 'string')]}] Edge properties: [{'edge': 'acted_in', 'properties': []}] Relationships: ['(:person)-[:acted_in]->(:movie)'] Querying the graph We can now use the graph cypher QA chain to ask questions of the graphchain = NebulaGraphQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Who played in The Godfather II?") > Entering new NebulaGraphQAChain chain... Generated nGQL: MATCH (p:`person`)-[:acted_in]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II' RETURN p.`person`.`name` Full Context: {'p.person.name': ['Al Pacino']} > Finished chain. 'Al Pacino played in The Godfather II.'PreviousMemgraph QA chainNextNetworkX Graph QARefresh graph schema informationQuerying the graphCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database.
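As a quick sanity check in the other direction, the same chain can be asked for the movies rather than the actor; a hypothetical follow-up over the toy data inserted above:

```python
# Reuses the NebulaGraphQAChain built above; the generated nGQL should now
# traverse from the person vertex to the movie vertices.
chain.run("Which movies did Al Pacino act in?")
```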
1,799
Neptune Open Cypher QA Chain | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingDiffbot Graph TransformerArangoDB QA chainNeo4j DB QA chainFalkorDBQAChainHugeGraph QA ChainKuzuQAChainMemgraph QA chainNebulaGraphQAChainNetworkX Graph QAGraphSparqlQAChainNeptune Open Cypher QA ChainGraph queryingNeptune Open Cypher QA ChainNeptune Open Cypher QA ChainThis QA chain queries Neptune graph database using openCypher and returns human readable responsefrom langchain.graphs import NeptuneGraphhost = "<neptune-host>"port = 8182use_https = Truegraph = NeptuneGraph(host=host, port=port, use_https=use_https)from langchain.chat_models import ChatOpenAIfrom langchain.chains import NeptuneOpenCypherQAChainllm = ChatOpenAI(temperature=0, model="gpt-4")chain = NeptuneOpenCypherQAChain.from_llm(llm=llm, graph=graph)chain.run("how many outgoing routes does the Austin airport have?") 'The Austin airport has 98 outgoing routes.'PreviousGraphSparqlQAChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This QA chain queries Neptune graph database using openCypher and returns human readable response
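The example question suggests the Neptune cluster is loaded with the air-routes sample dataset; under that assumption, any question expressible in openCypher can go through the same chain, for instance:

```python
# Hypothetical follow-up questions, assuming the air-routes sample data is loaded.
chain.run("Which airport has the most outgoing routes?")
chain.run("How many airports are in the graph?")
```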