# NetworkX Graph QA
This notebook goes over how to do question answering over a graph data structure.
## Create the graph

In this section, we construct an example graph. At the moment, this works best for small pieces of text.

```python
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader

index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))

with open("../../../modules/state_of_the_union.txt") as f:
    all_text = f.read()
```

We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment.

```python
text = "\n".join(all_text.split("\n\n")[105:108])
text
```

```
'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. '
```

```python
graph = index_creator.from_text(text)
```

We can inspect the created graph.

```python
graph.get_triples()
```

```
[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
 ('Intel', 'state-of-the-art factories', 'is building'),
 ('Intel', '10,000 new good-paying jobs', 'is creating'),
 ('Intel', 'Silicon Valley', 'is helping build'),
 ('Field of dreams', "America's future will be built", 'is the ground on which')]
```
future will be built", 'is the ground on which')]Querying the graph​We can now use the graph QA chain to ask question of the graphfrom langchain.chains import GraphQAChainchain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)chain.run("what is Intel going to build?") > Entering new GraphQAChain chain... Entities Extracted: Intel Full Context: Intel is going to build $20 billion semiconductor "mega site" Intel is building state-of-the-art factories Intel is creating 10,000 new good-paying jobs Intel is helping build Silicon Valley > Finished chain. ' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'Save the graph​We can also save and load the graph.graph.write_to_gml("graph.gml")from langchain.indexes.graph import NetworkxEntityGraphloaded_graph = NetworkxEntityGraph.from_gml("graph.gml")loaded_graph.get_triples() [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')]PreviousNebulaGraphQAChainNextGraphSparqlQAChainCreate the graphQuerying the graphSave the graphCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
# FalkorDBQAChain

This notebook shows how to use LLMs to provide a natural language interface to a FalkorDB database.
FalkorDB is a low-latency property graph database management system. You can run it locally with Docker:

```bash
docker run -p 6379:6379 -it --rm falkordb/falkordb:edge
```

Once it is running, you can create a database on the local machine and connect to it.

```python
from langchain.chat_models import ChatOpenAI
from langchain.graphs import FalkorDBGraph
from langchain.chains import FalkorDBQAChain
```

## Create a graph connection and insert some demo data

```python
graph = FalkorDBGraph(database="movies")

graph.query("""
    CREATE (al:Person {name: 'Al Pacino', birthDate: '1940-04-25'}),
        (robert:Person {name: 'Robert De Niro', birthDate: '1943-08-17'}),
        (tom:Person {name: 'Tom Cruise', birthDate: '1962-07-3'}),
        (val:Person {name: 'Val Kilmer', birthDate: '1959-12-31'}),
        (anthony:Person {name: 'Anthony Edwards', birthDate: '1962-7-19'}),
        (meg:Person {name: 'Meg Ryan', birthDate: '1961-11-19'}),
        (god1:Movie {title: 'The Godfather'}),
        (god2:Movie {title: 'The Godfather: Part II'}),
        (god3:Movie {title: 'The Godfather Coda: The Death of Michael Corleone'}),
        (top:Movie {title: 'Top Gun'}),
        (al)-[:ACTED_IN]->(god1),
        (al)-[:ACTED_IN]->(god2),
        (al)-[:ACTED_IN]->(god3),
        (robert)-[:ACTED_IN]->(god2),
        (tom)-[:ACTED_IN]->(top),
        (val)-[:ACTED_IN]->(top),
        (anthony)-[:ACTED_IN]->(top),
        (meg)-[:ACTED_IN]->(top)
""")
```

```
[]
```
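Before wiring up the chain, it can be useful to sanity-check the seeded data with a read query through the same graph.query() call used above. A minimal sketch; the exact shape of the returned rows may vary by FalkorDB version.

```python
# Read back a few (actor, movie) pairs to confirm the CREATE statement took effect.
rows = graph.query(
    "MATCH (p:Person)-[:ACTED_IN]->(m:Movie) RETURN p.name, m.title LIMIT 5"
)
print(rows)
```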
## Creating FalkorDBQAChain

```python
graph.refresh_schema()
print(graph.schema)

import os
os.environ['OPENAI_API_KEY'] = 'API_KEY_HERE'
```

```
Node properties: [[OrderedDict([('label', None), ('properties', ['name', 'birthDate', 'title'])])]]
Relationships properties: [[OrderedDict([('type', None), ('properties', [])])]]
Relationships: [['(:Person)-[:ACTED_IN]->(:Movie)']]
```

```python
chain = FalkorDBQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
```

## Querying the graph

```python
chain.run("Who played in Top Gun?")
```

```
> Entering new FalkorDBQAChain chain...
Generated Cypher:
MATCH (p:Person)-[:ACTED_IN]->(m:Movie)
WHERE m.title = 'Top Gun'
RETURN p.name
Full Context:
[['Tom Cruise'], ['Val Kilmer'], ['Anthony Edwards'], ['Meg Ryan'], ['Tom Cruise'], ['Val Kilmer'], ['Anthony Edwards'], ['Meg Ryan']]

> Finished chain.
'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'
```

```python
chain.run("Who is the oldest actor who played in The Godfather: Part II?")
```

```
> Entering new FalkorDBQAChain chain...
Generated Cypher:
MATCH (p:Person)-[r:ACTED_IN]->(m:Movie)
WHERE m.title = 'The Godfather: Part II'
RETURN p.name
ORDER BY p.birthDate ASC
LIMIT 1
Full Context:
[['Al Pacino']]

> Finished chain.
'The oldest actor who played in The Godfather: Part II is Al Pacino.'
```

```python
chain.run("Robert De Niro played in which movies?")
```

```
> Entering new FalkorDBQAChain chain...
Generated Cypher:
MATCH (p:Person {name: 'Robert De Niro'})-[:ACTED_IN]->(m:Movie)
RETURN m.title
Full Context:
[['The Godfather: Part II'], ['The Godfather: Part II']]

> Finished chain.
'Robert De Niro played in "The Godfather: Part II".'
```
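As a rough sketch of how this extends, further data can be inserted through the same graph.query() call and the existing chain reused afterwards; the movie title and actor below are invented for illustration, and refresh_schema() is called again so the chain's schema description stays current.

```python
# Hypothetical extra data -- the film title and actor name are invented for this example.
graph.query("""
    CREATE (m:Movie {title: 'Example Film'}),
        (p:Person {name: 'Pat Example', birthDate: '1970-01-01'}),
        (p)-[:ACTED_IN]->(m)
""")

# Refresh so the chain's schema description reflects the current graph.
graph.refresh_schema()

print(chain.run("Who played in Example Film?"))
```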
# Memgraph QA chain
This notebook shows how to use LLMs to provide a natural language interface to a Memgraph database. To complete this tutorial, you will need Docker and Python 3.x installed.
To follow along with this tutorial, ensure you have a running Memgraph instance. You can download and run it in a local Docker container by executing the following script:

```bash
docker run \
    -it \
    -p 7687:7687 \
    -p 7444:7444 \
    -p 3000:3000 \
    -e MEMGRAPH="--bolt-server-name-for-init=Neo4j/" \
    -v mg_lib:/var/lib/memgraph memgraph/memgraph-platform
```

You will need to wait a few seconds for the database to start. If the process completes successfully, you should see something like this:

```
mgconsole X.X
Connected to 'memgraph://127.0.0.1:7687'
Type :help for shell usage
Quit the shell by typing Ctrl-D(eof) or :quit
memgraph>
```

Now you can start playing with Memgraph!

Begin by installing and importing all the necessary packages. We'll use the package manager called pip, along with the --user flag, to ensure proper permissions. If you've installed Python 3.4 or a later version, pip is included by default. You can install all the required packages using the following command:

```bash
pip install langchain openai neo4j gqlalchemy --user
```

You can either run the provided code blocks in this notebook or use a separate Python file to experiment with Memgraph and LangChain.

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphCypherQAChain
from langchain.graphs import MemgraphGraph
from langchain.prompts import PromptTemplate
from gqlalchemy import Memgraph
import os
```
We're utilizing the Python library GQLAlchemy to establish a connection between our Memgraph database and Python script. To execute queries, we can set up a Memgraph instance as follows:

```python
memgraph = Memgraph(host='127.0.0.1', port=7687)
```

## Populating the database

You can effortlessly populate your new, empty database using the Cypher query language. Don't worry if you don't grasp every line just yet; you can learn Cypher from the documentation here. Running the following script will execute a seeding query on the database, giving us data about a video game, including details like the publisher, available platforms, and genres. This data will serve as a basis for our work.

```python
# Creating and executing the seeding query
query = """
    MERGE (g:Game {name: "Baldur's Gate 3"})
    WITH g, ["PlayStation 5", "Mac OS", "Windows", "Xbox Series X/S"] AS platforms,
            ["Adventure", "Role-Playing Game", "Strategy"] AS genres
    FOREACH (platform IN platforms |
        MERGE (p:Platform {name: platform})
        MERGE (g)-[:AVAILABLE_ON]->(p)
    )
    FOREACH (genre IN genres |
        MERGE (gn:Genre {name: genre})
        MERGE (g)-[:HAS_GENRE]->(gn)
    )
    MERGE (p:Publisher {name: "Larian Studios"})
    MERGE (g)-[:PUBLISHED_BY]->(p);
"""

memgraph.execute(query)
```

## Refresh graph schema

You're all set to instantiate the Memgraph-LangChain graph using the following script. This interface will allow us to query our database using LangChain, automatically creating the required graph schema for generating Cypher queries through an LLM.

```python
graph = MemgraphGraph(url="bolt://localhost:7687", username="", password="")
```

If necessary, you can manually refresh the graph schema as follows.

```python
graph.refresh_schema()
```
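While experimenting, you may want to reset the database and re-seed it before refreshing the schema again. A minimal sketch reusing the objects defined above; the DETACH DELETE statement is standard Cypher and removes all data, so only run it on a throwaway database.

```python
# Start from an empty database (this removes every node and relationship).
memgraph.execute("MATCH (n) DETACH DELETE n")

# Re-run the seeding query defined earlier, then refresh the cached schema.
memgraph.execute(query)
graph.refresh_schema()
```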
To familiarize yourself with the data and verify the updated graph schema, you can print it using the following statement.

```python
print(graph.schema)
```

```
Node properties are the following:
Node name: 'Game', Node properties: [{'property': 'name', 'type': 'str'}]
Node name: 'Platform', Node properties: [{'property': 'name', 'type': 'str'}]
Node name: 'Genre', Node properties: [{'property': 'name', 'type': 'str'}]
Node name: 'Publisher', Node properties: [{'property': 'name', 'type': 'str'}]
Relationship properties are the following:
The relationships are the following:
['(:Game)-[:AVAILABLE_ON]->(:Platform)']
['(:Game)-[:HAS_GENRE]->(:Genre)']
['(:Game)-[:PUBLISHED_BY]->(:Publisher)']
```

## Querying the database

To interact with the OpenAI API, you must configure your API key as an environment variable using the Python os package. This ensures proper authorization for your requests. You can find more information on obtaining your API key here.

```python
os.environ["OPENAI_API_KEY"] = "your-key-here"
```

You should create the graph chain using the following script, which will be utilized in the question-answering process based on your graph data. While it defaults to GPT-3.5-turbo, you might also consider experimenting with other models like GPT-4 for notably improved Cypher queries and outcomes. We'll use the OpenAI chat model with the key you previously configured and set the temperature to zero, ensuring predictable and consistent answers. Additionally, we'll use our Memgraph-LangChain graph and set the verbose parameter, which defaults to False, to True to receive more detailed messages regarding query generation.

```python
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True, model_name='gpt-3.5-turbo'
)
```

Now you can start asking questions!

```python
response = chain.run("Which platforms is Baldur's Gate 3 available on?")
print(response)
```

```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform)
RETURN p.name
Full Context:
[{'p.name': 'PlayStation 5'}, {'p.name': 'Mac OS'}, {'p.name': 'Windows'}, {'p.name': 'Xbox Series X/S'}]

> Finished chain.
Baldur's Gate 3 is available on PlayStation 5, Mac OS, Windows, and Xbox Series X/S.
```
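If you want to try a stronger model for Cypher generation, one option is to set the model name on the ChatOpenAI constructor; this sketch assumes your OpenAI account has access to GPT-4.

```python
# Same chain, but backed by GPT-4 for potentially better Cypher generation.
chain_gpt4 = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0, model_name="gpt-4"), graph=graph, verbose=True
)
print(chain_gpt4.run("Which platforms is Baldur's Gate 3 available on?"))
```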
```python
response = chain.run("Is Baldur's Gate 3 available on Windows?")
print(response)
```

```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(:Platform {name: 'Windows'})
RETURN true
Full Context:
[{'true': True}]

> Finished chain.
Yes, Baldur's Gate 3 is available on Windows.
```

## Chain modifiers

To modify the behavior of your chain and obtain more context or additional information, you can modify the chain's parameters.

### Return direct query results

The return_direct modifier specifies whether to return the direct results of the executed Cypher query or the processed natural language response.

```python
# Return the result of querying the graph directly
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True
)

response = chain.run("Which studio published Baldur's Gate 3?")
print(response)
```

```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:PUBLISHED_BY]->(p:Publisher)
RETURN p.name

> Finished chain.
[{'p.name': 'Larian Studios'}]
```

### Return query intermediate steps

The return_intermediate_steps chain modifier enhances the returned response by including the intermediate steps of the query in addition to the initial query result.

```python
# Return all the intermediate steps of query execution
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True
)

response = chain("Is Baldur's Gate 3 an Adventure game?")
print(f"Intermediate steps: {response['intermediate_steps']}")
print(f"Final response: {response['result']}")
```

```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:HAS_GENRE]->(genre:Genre {name: 'Adventure'})
RETURN g, genre
Full Context:
[{'g': {'name': "Baldur's Gate 3"}, 'genre': {'name': 'Adventure'}}]

> Finished chain.
Intermediate steps: [{'query': "MATCH (g:Game {name: 'Baldur\\'s Gate 3'})-[:HAS_GENRE]->(genre:Genre {name: 'Adventure'})\nRETURN g, genre"}, {'context': [{'g': {'name': "Baldur's Gate 3"}, 'genre': {'name': 'Adventure'}}]}]
Final response: Yes, Baldur's Gate 3 is an Adventure game.
```
### Limit the number of query results

The top_k modifier can be used when you want to restrict the maximum number of query results.

```python
# Limit the maximum number of results returned by query
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2
)

response = chain.run("What genres are associated with Baldur's Gate 3?")
print(response)
```

```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:HAS_GENRE]->(g:Genre)
RETURN g.name
Full Context:
[{'g.name': 'Adventure'}, {'g.name': 'Role-Playing Game'}]

> Finished chain.
Baldur's Gate 3 is associated with the genres Adventure and Role-Playing Game.
```

## Advanced querying

As the complexity of your solution grows, you might encounter different use cases that require careful handling. Ensuring your application's scalability is essential to maintain a smooth user flow without any hitches.

Let's instantiate our chain once again and attempt to ask some questions that users might potentially ask.

```python
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True, model_name='gpt-3.5-turbo'
)

response = chain.run("Is Baldur's Gate 3 available on PS5?")
print(response)
```

```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform {name: 'PS5'})
RETURN g.name, p.name
Full Context:
[]

> Finished chain.
I'm sorry, but I don't have the information to answer your question.
```
The generated Cypher query looks fine, but we didn't receive any information in response. This illustrates a common challenge when working with LLMs: a misalignment between how users phrase queries and how the data is actually stored. In this case, the difference between user perception and the actual data storage can cause mismatches. Prompt refinement, the process of honing the model's prompts to better grasp these distinctions, is an efficient way to tackle this issue. Through prompt refinement, the model gains increased proficiency in generating precise and pertinent queries, leading to the successful retrieval of the desired data.

## Prompt refinement

To address this, we can adjust the initial Cypher prompt of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain PromptTemplate, creating a modified initial prompt. This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance.

```python
CYPHER_GENERATION_TEMPLATE = """Task: Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
Schema:
{schema}
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
If the user asks about PS5, Play Station 5 or PS 5, that is the platform called PlayStation 5.

The question is:
{question}"""

CYPHER_GENERATION_PROMPT = PromptTemplate(
    input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE
)

chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    cypher_prompt=CYPHER_GENERATION_PROMPT,
    graph=graph,
    verbose=True,
    model_name='gpt-3.5-turbo',
)

response = chain.run("Is Baldur's Gate 3 available on PS5?")
print(response)
```
```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform {name: 'PlayStation 5'})
RETURN g.name, p.name
Full Context:
[{'g.name': "Baldur's Gate 3", 'p.name': 'PlayStation 5'}]

> Finished chain.
Yes, Baldur's Gate 3 is available on PlayStation 5.
```

Now, with the revised initial Cypher prompt that includes guidance on platform naming, we are obtaining accurate and relevant results that align more closely with user queries. This approach allows for further improvement of your QA chain. You can effortlessly integrate extra prompt refinement data into your chain, thereby enhancing the overall user experience of your app.
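The modifiers and the refined prompt compose. A minimal sketch, reusing the CYPHER_GENERATION_PROMPT defined above, that also returns the intermediate steps so you can inspect the alias-aware Cypher that gets generated; the question string is just an example alias spelling.

```python
# Combine the refined prompt with intermediate-step reporting.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    cypher_prompt=CYPHER_GENERATION_PROMPT,
    graph=graph,
    verbose=True,
    return_intermediate_steps=True,
)

response = chain("Is Baldur's Gate 3 available on PS 5?")  # example alias spelling
print(response["intermediate_steps"])  # generated Cypher plus the raw query context
print(response["result"])
```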
# HugeGraph QA Chain

This notebook shows how to use LLMs to provide a natural language interface to a HugeGraph database.
You will need to have a running HugeGraph instance.
You can run a local Docker container by executing the following script:

```bash
docker run \
    --name=graph \
    -itd \
    -p 8080:8080 \
    hugegraph/hugegraph
```

If we want to connect to HugeGraph from the application, we need to install the Python SDK:

```bash
pip3 install hugegraph-python
```

If you are using the Docker container, you need to wait a couple of seconds for the database to start, and then we need to create the schema and write graph data into the database.

```python
from hugegraph.connection import PyHugeGraph

client = PyHugeGraph("localhost", "8080", user="admin", pwd="admin", graph="hugegraph")
```

First, we create the schema for a simple movie database:

```python
"""schema"""
schema = client.schema()
schema.propertyKey("name").asText().ifNotExist().create()
schema.propertyKey("birthDate").asText().ifNotExist().create()
schema.vertexLabel("Person").properties(
    "name", "birthDate"
).usePrimaryKeyId().primaryKeys("name").ifNotExist().create()
schema.vertexLabel("Movie").properties("name").usePrimaryKeyId().primaryKeys(
    "name"
).ifNotExist().create()
schema.edgeLabel("ActedIn").sourceLabel("Person").targetLabel(
    "Movie"
).ifNotExist().create()
```

```
'create EdgeLabel success, Detail: "b\'{"id":1,"name":"ActedIn","source_label":"Person","target_label":"Movie","frequency":"SINGLE","sort_keys":[],"nullable_keys":[],"index_labels":[],"properties":[],"status":"CREATED","ttl":0,"enable_label_index":true,"user_data":{"~create_time":"2023-07-04 10:48:47.908"}}\'"'
```

Then we can insert some data.

```python
"""graph"""
g = client.graph()
g.addVertex("Person", {"name": "Al Pacino", "birthDate": "1940-04-25"})
g.addVertex("Person", {"name": "Robert De Niro", "birthDate": "1943-08-17"})
g.addVertex("Movie", {"name": "The Godfather"})
g.addVertex("Movie", {"name": "The Godfather Part II"})
g.addVertex("Movie", {"name": "The Godfather Coda The Death of Michael Corleone"})

g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather", {})
g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather Part II", {})
g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather Coda The Death of Michael Corleone", {})
g.addEdge("ActedIn", "1:Robert De Niro", "2:The Godfather Part II", {})
```

```
1:Robert De Niro--ActedIn-->2:The Godfather Part II
```
## Creating HugeGraphQAChain

We can now create the HugeGraph and HugeGraphQAChain. To create the HugeGraph we simply need to pass the database object to the HugeGraph constructor.

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import HugeGraphQAChain
from langchain.graphs import HugeGraph

graph = HugeGraph(
    username="admin",
    password="admin",
    address="localhost",
    port=8080,
    graph="hugegraph",
)
```

## Refresh graph schema information

If the schema of the database changes, you can refresh the schema information needed to generate Gremlin statements.

```python
# graph.refresh_schema()
print(graph.get_schema)
```

```
Node properties: [name: Person, primary_keys: ['name'], properties: ['name', 'birthDate'], name: Movie, primary_keys: ['name'], properties: ['name']]
Edge properties: [name: ActedIn, properties: []]
Relationships: ['Person--ActedIn-->Movie']
```

## Querying the graph

We can now use the graph Gremlin QA chain to ask questions of the graph.

```python
chain = HugeGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who played in The Godfather?")
```

```
> Entering new chain...
Generated gremlin:
g.V().has('Movie', 'name', 'The Godfather').in('ActedIn').valueMap(true)
Full Context:
[{'id': '1:Al Pacino', 'label': 'Person', 'name': ['Al Pacino'], 'birthDate': ['1940-04-25']}]

> Finished chain.
'Al Pacino played in The Godfather.'
```
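As a rough sketch of extending the toy dataset, the same addVertex and addEdge calls can register another actor before re-querying. The '1:&lt;name&gt;' and '2:&lt;title&gt;' vertex IDs follow the pattern visible in the demo output above, which is an assumption about how this primary-key schema assigns IDs.

```python
# Add another actor and link them to an existing movie, then ask the chain again.
g.addVertex("Person", {"name": "Diane Keaton", "birthDate": "1946-01-05"})
g.addEdge("ActedIn", "1:Diane Keaton", "2:The Godfather", {})

chain.run("Who played in The Godfather?")
```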
# Neo4j DB QA chain
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.
You will need to have a running Neo4j instance. One option is to create a free Neo4j database instance in their Aura cloud service. You can also run the database locally using the Neo4j Desktop application, or by running a Docker container.
You can run a local Docker container by executing the following script:

```bash
docker run \
    --name neo4j \
    -p 7474:7474 -p 7687:7687 \
    -d \
    -e NEO4J_AUTH=neo4j/pleaseletmein \
    -e NEO4J_PLUGINS=\[\"apoc\"\] \
    neo4j:latest
```

If you are using the Docker container, you need to wait a couple of seconds for the database to start.

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphCypherQAChain
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(
    url="bolt://localhost:7687", username="neo4j", password="pleaseletmein"
)
```

```
/home/tomaz/neo4j/langchain/libs/langchain/langchain/graphs/neo4j_graph.py:52: ExperimentalWarning: The configuration may change in the future.
  self._driver.verify_connectivity()
```

## Seeding the database

Assuming your database is empty, you can populate it using the Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same whether you run it once or multiple times.

```python
graph.query(
    """
MERGE (m:Movie {name:"Top Gun"})
WITH m
UNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actor
MERGE (a:Actor {name:actor})
MERGE (a)-[:ACTED_IN]->(m)
"""
)
```

```
[]
```

## Refresh graph schema information

If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements.

```python
graph.refresh_schema()
print(graph.schema)
```

```
Node properties are the following:
[{'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'}, {'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Actor'}]
Relationship properties are the following:
[]
The relationships are the following:
['(:Actor)-[:ACTED_IN]->(:Movie)']
```
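If you want to grow the toy dataset, the same idempotent MERGE pattern works for additional movies; the extra film below is just an example addition, and refresh_schema() is rerun so the generated Cypher prompts see the current schema.

```python
# Extra seed data following the same idempotent MERGE pattern.
graph.query(
    """
MERGE (m:Movie {name:"Days of Thunder"})
WITH m
UNWIND ["Tom Cruise", "Nicole Kidman"] AS actor
MERGE (a:Actor {name:actor})
MERGE (a)-[:ACTED_IN]->(m)
"""
)

# Refresh so the chain's Cypher-generation prompt reflects the current schema.
graph.refresh_schema()
```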
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language. ->: You can run a local docker container by running the executing the following script:docker run \ --name neo4j \ -p 7474:7474 -p 7687:7687 \ -d \ -e NEO4J_AUTH=neo4j/pleaseletmein \ -e NEO4J_PLUGINS=\[\"apoc\"\] \ neo4j:latestIf you are using the docker container, you need to wait a couple of second for the database to start.from langchain.chat_models import ChatOpenAIfrom langchain.chains import GraphCypherQAChainfrom langchain.graphs import Neo4jGraphgraph = Neo4jGraph( url="bolt://localhost:7687", username="neo4j", password="pleaseletmein") /home/tomaz/neo4j/langchain/libs/langchain/langchain/graphs/neo4j_graph.py:52: ExperimentalWarning: The configuration may change in the future. self._driver.verify_connectivity()Seeding the database‚ÄãAssuming your database is empty, you can populate it using Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same if you run it one or multiple times.graph.query( """MERGE (m:Movie {name:"Top Gun"})WITH mUNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actorMERGE (a:Actor {name:actor})MERGE (a)-[:ACTED_IN]->(m)""") []Refresh graph schema information‚ÄãIf the schema of database changes, you can refresh the schema information needed to generate Cypher statements.graph.refresh_schema()print(graph.schema) Node properties are the following: [{'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'}, {'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Actor'}] Relationship properties are the following: [] The relationships are the following: ['(:Actor)-[:ACTED_IN]->(:Movie)'] Querying the graph‚ÄãWe can now use the graph cypher QA chain to ask question of the graphchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Who played in Top Gun?")
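Because the container takes a moment to accept Bolt connections, a small retry loop can stand in for the fixed wait; the sketch below is not part of the original notebook and simply retries the Neo4jGraph constructor (which verifies connectivity on creation) until it succeeds or a timeout is reached.
import time
from langchain.graphs import Neo4jGraph

def wait_for_neo4j(url="bolt://localhost:7687", username="neo4j", password="pleaseletmein", timeout=30):
    # Keep retrying the connection until the freshly started container is ready.
    deadline = time.time() + timeout
    while True:
        try:
            return Neo4jGraph(url=url, username=username, password=password)
        except Exception:
            if time.time() > deadline:
                raise
            time.sleep(1)

graph = wait_for_neo4j()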
1,822
played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'Limit the number of results‚ÄãYou can limit the number of results from the Cypher QA Chain using the top_k parameter.
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language. ->: played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'Limit the number of results‚ÄãYou can limit the number of results from the Cypher QA Chain using the top_k parameter.
1,823
The default is 10.chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}] > Finished chain. 'Tom Cruise and Val Kilmer played in Top Gun.'Return intermediate results‚ÄãYou can return intermediate steps from the Cypher QA Chain using the return_intermediate_steps parameterchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True)result = chain("Who played in Top Gun?")print(f"Intermediate steps: {result['intermediate_steps']}")print(f"Final answer: {result['result']}") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. Intermediate steps: [{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\nRETURN a.name"}, {'context': [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]}] Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.Return direct results‚ÄãYou can return direct results from the Cypher QA Chain using the return_direct parameterchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name > Finished chain. [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name':
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language. ->: The default is 10.chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}] > Finished chain. 'Tom Cruise and Val Kilmer played in Top Gun.'Return intermediate results‚ÄãYou can return intermediate steps from the Cypher QA Chain using the return_intermediate_steps parameterchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True)result = chain("Who played in Top Gun?")print(f"Intermediate steps: {result['intermediate_steps']}")print(f"Final answer: {result['result']}") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. Intermediate steps: [{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\nRETURN a.name"}, {'context': [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]}] Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.Return direct results‚ÄãYou can return direct results from the Cypher QA Chain using the return_direct parameterchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name > Finished chain. [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name':
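When return_intermediate_steps=True, the output dictionary can be unpacked to log the generated Cypher and the raw database rows next to the final answer; the sketch below assumes the intermediate_steps layout shown above (a 'query' entry followed by a 'context' entry).
result = chain("Who played in Top Gun?")
steps = result["intermediate_steps"]
generated_cypher = steps[0]["query"]   # Cypher statement produced by the LLM
database_rows = steps[1]["context"]    # rows returned by Neo4j
print("Cypher:", generated_cypher)
print("Rows:", database_rows)
print("Answer:", result["result"])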
1,824
{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]Add examples in the Cypher generation prompt‚ÄãYou can define the Cypher statement you want the LLM to generate for particular questionsfrom langchain.prompts.prompt import PromptTemplateCYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.Instructions:Use only the provided relationship types and properties in the schema.Do not use any other relationship types or properties that are not provided.Schema:{schema}Note: Do not include any explanations or apologies in your responses.Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.Do not include any text except the generated Cypher statement.Examples: Here are a few examples of generated Cypher statements for particular questions:# How many people played in Top Gun?MATCH (m:Movie {{title:"Top Gun"}})<-[:ACTED_IN]-()RETURN count(*) AS numberOfActorsThe question is:{question}"""CYPHER_GENERATION_PROMPT = PromptTemplate( input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE)chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, cypher_prompt=CYPHER_GENERATION_PROMPT)chain.run("How many people played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (m:Movie {name:"Top Gun"})<-[:ACTED_IN]-(:Actor) RETURN count(*) AS numberOfActors Full Context: [{'numberOfActors': 4}] > Finished chain. 'Four people played in Top Gun.'Use separate LLMs for Cypher and answer generation‚ÄãYou can use the cypher_llm and qa_llm parameters to define different llmschain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True,)chain.run("Who played in Top Gun?") > Entering new
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language. ->: {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]Add examples in the Cypher generation prompt‚ÄãYou can define the Cypher statement you want the LLM to generate for particular questionsfrom langchain.prompts.prompt import PromptTemplateCYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.Instructions:Use only the provided relationship types and properties in the schema.Do not use any other relationship types or properties that are not provided.Schema:{schema}Note: Do not include any explanations or apologies in your responses.Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.Do not include any text except the generated Cypher statement.Examples: Here are a few examples of generated Cypher statements for particular questions:# How many people played in Top Gun?MATCH (m:Movie {{title:"Top Gun"}})<-[:ACTED_IN]-()RETURN count(*) AS numberOfActorsThe question is:{question}"""CYPHER_GENERATION_PROMPT = PromptTemplate( input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE)chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, cypher_prompt=CYPHER_GENERATION_PROMPT)chain.run("How many people played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (m:Movie {name:"Top Gun"})<-[:ACTED_IN]-(:Actor) RETURN count(*) AS numberOfActors Full Context: [{'numberOfActors': 4}] > Finished chain. 'Four people played in Top Gun.'Use separate LLMs for Cypher and answer generation‚ÄãYou can use the cypher_llm and qa_llm parameters to define different llmschain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True,)chain.run("Who played in Top Gun?") > Entering new
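The two options can be combined; the following sketch (not in the original notebook) passes the few-shot cypher_prompt together with separate cypher_llm and qa_llm models, assuming from_llm accepts them side by side as in recent versions.
chain = GraphCypherQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"),
    cypher_prompt=CYPHER_GENERATION_PROMPT,  # few-shot prompt defined above
    verbose=True,
)
chain.run("How many people played in Top Gun?")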
1,825
played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'Ignore specified node and relationship typesYou can use include_types or exclude_types to ignore parts of the graph schema when generating Cypher statements.chain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True, exclude_types=['Movie'])# Inspect graph schemaprint(chain.graph_schema) Node properties are the following: {'Actor': [{'property': 'name', 'type': 'STRING'}]} Relationships properties are the following: {} Relationships are: []Validate generated Cypher statementsYou can use the validate_cypher parameter to validate and correct relationship directions in generated Cypher statementschain = GraphCypherQAChain.from_llm( llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), graph=graph, verbose=True, validate_cypher=True)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'PreviousArangoDB QA chainNextFalkorDBQAChainSeeding the databaseRefresh graph schema informationQuerying the graphLimit the number of resultsReturn intermediate resultsReturn direct resultsAdd examples in the Cypher generation promptUse separate LLMs for
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language. ->: played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'Ignore specified node and relationship typesYou can use include_types or exclude_types to ignore parts of the graph schema when generating Cypher statements.chain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True, exclude_types=['Movie'])# Inspect graph schemaprint(chain.graph_schema) Node properties are the following: {'Actor': [{'property': 'name', 'type': 'STRING'}]} Relationships properties are the following: {} Relationships are: []Validate generated Cypher statementsYou can use the validate_cypher parameter to validate and correct relationship directions in generated Cypher statementschain = GraphCypherQAChain.from_llm( llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), graph=graph, verbose=True, validate_cypher=True)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'PreviousArangoDB QA chainNextFalkorDBQAChainSeeding the databaseRefresh graph schema informationQuerying the graphLimit the number of resultsReturn intermediate resultsReturn direct resultsAdd examples in the Cypher generation promptUse separate LLMs for
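The complementary include_types option keeps only the listed parts of the schema; the sketch below is illustrative and assumes include_types accepts node labels and relationship types in a single list, mirroring exclude_types.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    graph=graph,
    verbose=True,
    include_types=["Actor", "Movie", "ACTED_IN"],  # everything else is hidden from the LLM
)
print(chain.graph_schema)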
1,827
ArangoDB QA chain | 🦜️🔗 Langchain
Open In Colab
Open In Colab ->: ArangoDB QA chain | 🦜️🔗 Langchain
1,828
ArangoDB QA chainThis notebook shows how to use LLMs to provide a natural language interface to an ArangoDB database.You can get a local ArangoDB instance running via the ArangoDB Docker image: docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD= arangodb/arangodbAn alternative is to use the ArangoDB Cloud Connector package to get a temporary cloud instance running:pip install python-arango # The ArangoDB Python Driverpip install adb-cloud-connector # The ArangoDB Cloud Instance provisionerpip install openaipip install langchain# Instantiate ArangoDB Databaseimport jsonfrom arango import ArangoClientfrom adb_cloud_connector import get_temp_credentialscon = get_temp_credentials()db = ArangoClient(hosts=con["url"]).db( con["dbName"], con["username"], con["password"], verify=True)print(json.dumps(con, indent=2)) Log: requesting new credentials... Succcess: new credentials acquired { "dbName": "TUT3sp29s3pjf1io0h4cfdsq", "username": "TUTo6nkwgzkizej3kysgdyeo8", "password": "TUT9vx0qjqt42i9bq8uik4v9", "hostname": "tutorials.arangodb.cloud", "port": 8529, "url": "https://tutorials.arangodb.cloud:8529" }# Instantiate the ArangoDB-LangChain Graphfrom langchain.graphs import ArangoGraphgraph = ArangoGraph(db)Populating the Database​We will rely on the Python Driver to import our GameOfThrones data into our database.if db.has_graph("GameOfThrones"): db.delete_graph("GameOfThrones",
Open In Colab
Open In Colab ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingDiffbot Graph TransformerArangoDB QA chainNeo4j DB QA chainFalkorDBQAChainHugeGraph QA ChainKuzuQAChainMemgraph QA chainNebulaGraphQAChainNetworkX Graph QAGraphSparqlQAChainNeptune Open Cypher QA ChainGraph queryingArangoDB QA chainOn this pageArangoDB QA chainThis notebook shows how to use LLMs to provide a natural language interface to an ArangoDB database.You can get a local ArangoDB instance running via the ArangoDB Docker image: docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD= arangodb/arangodbAn alternative is to use the ArangoDB Cloud Connector package to get a temporary cloud instance running:pip install python-arango # The ArangoDB Python Driverpip install adb-cloud-connector # The ArangoDB Cloud Instance provisionerpip install openaipip install langchain# Instantiate ArangoDB Databaseimport jsonfrom arango import ArangoClientfrom adb_cloud_connector import get_temp_credentialscon = get_temp_credentials()db = ArangoClient(hosts=con["url"]).db( con["dbName"], con["username"], con["password"], verify=True)print(json.dumps(con, indent=2)) Log: requesting new credentials... Succcess: new credentials acquired { "dbName": "TUT3sp29s3pjf1io0h4cfdsq", "username": "TUTo6nkwgzkizej3kysgdyeo8", "password": "TUT9vx0qjqt42i9bq8uik4v9", "hostname": "tutorials.arangodb.cloud", "port": 8529, "url": "https://tutorials.arangodb.cloud:8529" }# Instantiate the ArangoDB-LangChain Graphfrom langchain.graphs import ArangoGraphgraph = ArangoGraph(db)Populating the Database​We will rely on the Python Driver to import our GameOfThrones data into our database.if db.has_graph("GameOfThrones"): db.delete_graph("GameOfThrones",
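If you prefer the local Docker container from the first option over a temporary cloud instance, the connection is a short call through the python-arango driver; the database name and root password below are placeholders for whatever you configured via ARANGO_ROOT_PASSWORD.
from arango import ArangoClient
from langchain.graphs import ArangoGraph

# Placeholder credentials for a local container started with the docker command above.
db = ArangoClient(hosts="http://localhost:8529").db(
    "_system", username="root", password="", verify=True
)
graph = ArangoGraph(db)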
1,829
db.delete_graph("GameOfThrones", drop_collections=True)db.create_graph( "GameOfThrones", edge_definitions=[ { "edge_collection": "ChildOf", "from_vertex_collections": ["Characters"], "to_vertex_collections": ["Characters"], }, ],)documents = [ { "_key": "NedStark", "name": "Ned", "surname": "Stark", "alive": True, "age": 41, "gender": "male", }, { "_key": "CatelynStark", "name": "Catelyn", "surname": "Stark", "alive": False, "age": 40, "gender": "female", }, { "_key": "AryaStark", "name": "Arya", "surname": "Stark", "alive": True, "age": 11, "gender": "female", }, { "_key": "BranStark", "name": "Bran", "surname": "Stark", "alive": True, "age": 10, "gender": "male", },]edges = [ {"_to": "Characters/NedStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/NedStark", "_from": "Characters/BranStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/BranStark"},]db.collection("Characters").import_bulk(documents)db.collection("ChildOf").import_bulk(edges) {'error': False, 'created': 4, 'errors': 0, 'empty': 0, 'updated': 0, 'ignored': 0, 'details': []}Getting & Setting the ArangoDB Schema‚ÄãAn initial ArangoDB Schema is generated upon instantiating the ArangoDBGraph object. Below are the schema's getter & setter methods should you be interested in viewing or modifying the schema:# The schema should be empty here,# since `graph` was initialized prior to ArangoDB Data ingestion (see above).import jsonprint(json.dumps(graph.schema, indent=4)) { "Graph Schema": [], "Collection Schema": [] }graph.set_schema()# We can now view the generated schemaimport jsonprint(json.dumps(graph.schema,
Open In Colab
Open In Colab ->: db.delete_graph("GameOfThrones", drop_collections=True)db.create_graph( "GameOfThrones", edge_definitions=[ { "edge_collection": "ChildOf", "from_vertex_collections": ["Characters"], "to_vertex_collections": ["Characters"], }, ],)documents = [ { "_key": "NedStark", "name": "Ned", "surname": "Stark", "alive": True, "age": 41, "gender": "male", }, { "_key": "CatelynStark", "name": "Catelyn", "surname": "Stark", "alive": False, "age": 40, "gender": "female", }, { "_key": "AryaStark", "name": "Arya", "surname": "Stark", "alive": True, "age": 11, "gender": "female", }, { "_key": "BranStark", "name": "Bran", "surname": "Stark", "alive": True, "age": 10, "gender": "male", },]edges = [ {"_to": "Characters/NedStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/NedStark", "_from": "Characters/BranStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/BranStark"},]db.collection("Characters").import_bulk(documents)db.collection("ChildOf").import_bulk(edges) {'error': False, 'created': 4, 'errors': 0, 'empty': 0, 'updated': 0, 'ignored': 0, 'details': []}Getting & Setting the ArangoDB Schema‚ÄãAn initial ArangoDB Schema is generated upon instantiating the ArangoDBGraph object. Below are the schema's getter & setter methods should you be interested in viewing or modifying the schema:# The schema should be empty here,# since `graph` was initialized prior to ArangoDB Data ingestion (see above).import jsonprint(json.dumps(graph.schema, indent=4)) { "Graph Schema": [], "Collection Schema": [] }graph.set_schema()# We can now view the generated schemaimport jsonprint(json.dumps(graph.schema,
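As a quick sanity check (not part of the original notebook), you can confirm the bulk import above landed as expected by running a plain AQL query through the same python-arango database handle.
cursor = db.aql.execute("FOR c IN Characters RETURN c.name")
print(list(cursor))  # expected: ['Ned', 'Catelyn', 'Arya', 'Bran']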
1,830
schemaimport jsonprint(json.dumps(graph.schema, indent=4)) { "Graph Schema": [ { "graph_name": "GameOfThrones", "edge_definitions": [ { "edge_collection": "ChildOf", "from_vertex_collections": [ "Characters" ], "to_vertex_collections": [ "Characters" ] } ] } ], "Collection Schema": [ { "collection_name": "ChildOf", "collection_type": "edge", "edge_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str" }, { "name": "_from", "type": "str" }, { "name": "_to", "type": "str" }, { "name": "_rev", "type": "str" } ], "example_edge": { "_key": "266218884025", "_id": "ChildOf/266218884025", "_from": "Characters/AryaStark", "_to": "Characters/NedStark", "_rev": "_gVPKGSq---" } }, { "collection_name": "Characters", "collection_type": "document", "document_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str"
Open In Colab
Open In Colab ->: schemaimport jsonprint(json.dumps(graph.schema, indent=4)) { "Graph Schema": [ { "graph_name": "GameOfThrones", "edge_definitions": [ { "edge_collection": "ChildOf", "from_vertex_collections": [ "Characters" ], "to_vertex_collections": [ "Characters" ] } ] } ], "Collection Schema": [ { "collection_name": "ChildOf", "collection_type": "edge", "edge_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str" }, { "name": "_from", "type": "str" }, { "name": "_to", "type": "str" }, { "name": "_rev", "type": "str" } ], "example_edge": { "_key": "266218884025", "_id": "ChildOf/266218884025", "_from": "Characters/AryaStark", "_to": "Characters/NedStark", "_rev": "_gVPKGSq---" } }, { "collection_name": "Characters", "collection_type": "document", "document_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str"
1,831
"type": "str" }, { "name": "_rev", "type": "str" }, { "name": "name", "type": "str" }, { "name": "surname", "type": "str" }, { "name": "alive", "type": "bool" }, { "name": "age", "type": "int" }, { "name": "gender", "type": "str" } ], "example_document": { "_key": "NedStark", "_id": "Characters/NedStark", "_rev": "_gVPKGPi---", "name": "Ned", "surname": "Stark", "alive": true, "age": 41, "gender": "male" } } ] }Querying the ArangoDB Database‚ÄãWe can now use the ArangoDB Graph QA Chain to inquire about our dataimport osos.environ["OPENAI_API_KEY"] = "your-key-here"from langchain.chat_models import ChatOpenAIfrom langchain.chains import ArangoGraphQAChainchain = ArangoGraphQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Is Ned Stark alive?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters FOR character IN Characters FILTER character.name == "Ned" AND character.surname == "Stark" RETURN character.alive AQL Result: [True] > Finished chain. 'Yes, Ned Stark is alive.'chain.run("How old is Arya Stark?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters FOR character IN
Open In Colab
Open In Colab ->: "type": "str" }, { "name": "_rev", "type": "str" }, { "name": "name", "type": "str" }, { "name": "surname", "type": "str" }, { "name": "alive", "type": "bool" }, { "name": "age", "type": "int" }, { "name": "gender", "type": "str" } ], "example_document": { "_key": "NedStark", "_id": "Characters/NedStark", "_rev": "_gVPKGPi---", "name": "Ned", "surname": "Stark", "alive": true, "age": 41, "gender": "male" } } ] }Querying the ArangoDB Database‚ÄãWe can now use the ArangoDB Graph QA Chain to inquire about our dataimport osos.environ["OPENAI_API_KEY"] = "your-key-here"from langchain.chat_models import ChatOpenAIfrom langchain.chains import ArangoGraphQAChainchain = ArangoGraphQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Is Ned Stark alive?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters FOR character IN Characters FILTER character.name == "Ned" AND character.surname == "Stark" RETURN character.alive AQL Result: [True] > Finished chain. 'Yes, Ned Stark is alive.'chain.run("How old is Arya Stark?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters FOR character IN
1,832
Query (1): WITH Characters FOR character IN Characters FILTER character.name == "Arya" && character.surname == "Stark" RETURN character.age AQL Result: [11] > Finished chain. 'Arya Stark is 11 years old.'chain.run("Are Arya Stark and Ned Stark related?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters, ChildOf FOR v, e, p IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf FILTER p.vertices[-1]._key == 'NedStark' RETURN p AQL Result: [{'vertices': [{'_key': 'AryaStark', '_id': 'Characters/AryaStark', '_rev': '_gVPKGPi--B', 'name': 'Arya', 'surname': 'Stark', 'alive': True, 'age': 11, 'gender': 'female'}, {'_key': 'NedStark', '_id': 'Characters/NedStark', '_rev': '_gVPKGPi---', 'name': 'Ned', 'surname': 'Stark', 'alive': True, 'age': 41, 'gender': 'male'}], 'edges': [{'_key': '266218884025', '_id': 'ChildOf/266218884025', '_from': 'Characters/AryaStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq---'}], 'weights': [0, 1]}] > Finished chain. 'Yes, Arya Stark and Ned Stark are related. According to the information retrieved from the database, there is a relationship between them. Arya Stark is the child of Ned Stark.'chain.run("Does Arya Stark have a dead parent?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters, ChildOf FOR v, e IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf FILTER v.alive == false RETURN e AQL Result: [{'_key': '266218884027', '_id': 'ChildOf/266218884027', '_from': 'Characters/AryaStark', '_to': 'Characters/CatelynStark', '_rev': '_gVPKGSu---'}] > Finished chain. 'Yes, Arya Stark has a dead parent. The parent is Catelyn Stark.'Chain Modifiers‚ÄãYou can alter the values of the following ArangoDBGraphQAChain class variables to modify the behaviour of your chain results# Specify the maximum number of AQL Query Results to returnchain.top_k = 10# Specify whether or not
Open In Colab
Open In Colab ->: Query (1): WITH Characters FOR character IN Characters FILTER character.name == "Arya" && character.surname == "Stark" RETURN character.age AQL Result: [11] > Finished chain. 'Arya Stark is 11 years old.'chain.run("Are Arya Stark and Ned Stark related?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters, ChildOf FOR v, e, p IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf FILTER p.vertices[-1]._key == 'NedStark' RETURN p AQL Result: [{'vertices': [{'_key': 'AryaStark', '_id': 'Characters/AryaStark', '_rev': '_gVPKGPi--B', 'name': 'Arya', 'surname': 'Stark', 'alive': True, 'age': 11, 'gender': 'female'}, {'_key': 'NedStark', '_id': 'Characters/NedStark', '_rev': '_gVPKGPi---', 'name': 'Ned', 'surname': 'Stark', 'alive': True, 'age': 41, 'gender': 'male'}], 'edges': [{'_key': '266218884025', '_id': 'ChildOf/266218884025', '_from': 'Characters/AryaStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq---'}], 'weights': [0, 1]}] > Finished chain. 'Yes, Arya Stark and Ned Stark are related. According to the information retrieved from the database, there is a relationship between them. Arya Stark is the child of Ned Stark.'chain.run("Does Arya Stark have a dead parent?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters, ChildOf FOR v, e IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf FILTER v.alive == false RETURN e AQL Result: [{'_key': '266218884027', '_id': 'ChildOf/266218884027', '_from': 'Characters/AryaStark', '_to': 'Characters/CatelynStark', '_rev': '_gVPKGSu---'}] > Finished chain. 'Yes, Arya Stark has a dead parent. The parent is Catelyn Stark.'Chain Modifiers‚ÄãYou can alter the values of the following ArangoDBGraphQAChain class variables to modify the behaviour of your chain results# Specify the maximum number of AQL Query Results to returnchain.top_k = 10# Specify whether or not
1,833
to returnchain.top_k = 10# Specify whether or not to return the AQL Query in the output dictionarychain.return_aql_query = True# Specify whether or not to return the AQL JSON Result in the output dictionarychain.return_aql_result = True# Specify the maximum amount of AQL Generation attempts that should be madechain.max_aql_generation_attempts = 5# Specify a set of AQL Query Examples, which are passed to# the AQL Generation Prompt Template to promote few-shot-learning.# Defaults to an empty string.chain.aql_examples = """# Is Ned Stark alive?RETURN DOCUMENT('Characters/NedStark').alive# Is Arya Stark the child of Ned Stark?FOR e IN ChildOf FILTER e._from == "Characters/AryaStark" AND e._to == "Characters/NedStark" RETURN e"""chain.run("Is Ned Stark alive?")# chain("Is Ned Stark alive?") # Returns a dictionary with the AQL Query & AQL Result > Entering new ArangoGraphQAChain chain... AQL Query (1): RETURN DOCUMENT('Characters/NedStark').alive AQL Result: [True] > Finished chain. 'Yes, according to the information in the database, Ned Stark is alive.'chain.run("Is Bran Stark the child of Ned Stark?") > Entering new ArangoGraphQAChain chain... AQL Query (1): FOR e IN ChildOf FILTER e._from == "Characters/BranStark" AND e._to == "Characters/NedStark" RETURN e AQL Result: [{'_key': '266218884026', '_id': 'ChildOf/266218884026', '_from': 'Characters/BranStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq--_'}] > Finished chain. 'Yes, according to the information in the ArangoDB database, Bran Stark is indeed the child of Ned Stark.'
Open In Colab
Open In Colab ->: to returnchain.top_k = 10# Specify whether or not to return the AQL Query in the output dictionarychain.return_aql_query = True# Specify whether or not to return the AQL JSON Result in the output dictionarychain.return_aql_result = True# Specify the maximum amount of AQL Generation attempts that should be madechain.max_aql_generation_attempts = 5# Specify a set of AQL Query Examples, which are passed to# the AQL Generation Prompt Template to promote few-shot-learning.# Defaults to an empty string.chain.aql_examples = """# Is Ned Stark alive?RETURN DOCUMENT('Characters/NedStark').alive# Is Arya Stark the child of Ned Stark?FOR e IN ChildOf FILTER e._from == "Characters/AryaStark" AND e._to == "Characters/NedStark" RETURN e"""chain.run("Is Ned Stark alive?")# chain("Is Ned Stark alive?") # Returns a dictionary with the AQL Query & AQL Result > Entering new ArangoGraphQAChain chain... AQL Query (1): RETURN DOCUMENT('Characters/NedStark').alive AQL Result: [True] > Finished chain. 'Yes, according to the information in the database, Ned Stark is alive.'chain.run("Is Bran Stark the child of Ned Stark?") > Entering new ArangoGraphQAChain chain... AQL Query (1): FOR e IN ChildOf FILTER e._from == "Characters/BranStark" AND e._to == "Characters/NedStark" RETURN e AQL Result: [{'_key': '266218884026', '_id': 'ChildOf/266218884026', '_from': 'Characters/BranStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq--_'}] > Finished chain. 'Yes, according to the information in the ArangoDB database, Bran Stark is indeed the child of Ned Stark.'PreviousDiffbot Graph TransformerNextNeo4j DB QA chainPopulating the DatabaseGetting & Setting the ArangoDB SchemaQuerying the ArangoDB DatabaseChain ModifiersCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
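With return_aql_query and return_aql_result enabled, calling the chain as a function returns a dictionary with the extra fields; the key names used below ('aql_query', 'aql_result') are an assumption based on the parameter names, so check result.keys() against your installed version.
result = chain("Is Ned Stark alive?")
print(result.keys())
print(result.get("aql_query"))   # the generated AQL statement, if present
print(result.get("aql_result"))  # the raw AQL JSON result, if present
print(result["result"])          # the natural-language answer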
1,834
Diffbot Graph Transformer | 🦜️🔗 Langchain
Open In Colab
Open In Colab ->: Diffbot Graph Transformer | 🦜️🔗 Langchain
1,835
Diffbot Graph TransformerUse case​Text data often contain rich relationships and insights that can be useful for various analytics, recommendation engines, or knowledge management applications.Diffbot's NLP API allows for the extraction of entities, relationships, and semantic meaning from unstructured text data.By coupling Diffbot's NLP API with Neo4j, a graph database, you can create powerful, dynamic graph structures based on the information extracted from text. These graph structures are fully queryable and can be integrated into various applications.This combination allows for use cases such as:Building knowledge graphs from textual documents, websites, or social media feeds.Generating recommendations based on semantic relationships in the data.Creating advanced search features that understand the relationships between entities.Building analytics dashboards that allow users to explore the hidden relationships in data.Overview​LangChain provides tools to interact with Graph Databases:Construct knowledge graphs from text using graph transformer and store integrations Query a graph database using chains for query creation and executionInteract with a graph database using agents for robust and flexible querying Quickstart​First, get required packages and set environment variables:pip install langchain langchain-experimental openai neo4j wikipediaDiffbot NLP Service​Diffbot's NLP service is a tool for extracting entities,
Open In Colab
Open In Colab ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingDiffbot Graph TransformerArangoDB QA chainNeo4j DB QA chainFalkorDBQAChainHugeGraph QA ChainKuzuQAChainMemgraph QA chainNebulaGraphQAChainNetworkX Graph QAGraphSparqlQAChainNeptune Open Cypher QA ChainGraph queryingDiffbot Graph TransformerOn this pageDiffbot Graph TransformerUse case​Text data often contain rich relationships and insights that can be useful for various analytics, recommendation engines, or knowledge management applications.Diffbot's NLP API allows for the extraction of entities, relationships, and semantic meaning from unstructured text data.By coupling Diffbot's NLP API with Neo4j, a graph database, you can create powerful, dynamic graph structures based on the information extracted from text. These graph structures are fully queryable and can be integrated into various applications.This combination allows for use cases such as:Building knowledge graphs from textual documents, websites, or social media feeds.Generating recommendations based on semantic relationships in the data.Creating advanced search features that understand the relationships between entities.Building analytics dashboards that allow users to explore the hidden relationships in data.Overview​LangChain provides tools to interact with Graph Databases:Construct knowledge graphs from text using graph transformer and store integrations Query a graph database using chains for query creation and executionInteract with a graph database using agents for robust and flexible querying Quickstart​First, get required packages and set environment variables:pip install langchain langchain-experimental openai neo4j wikipediaDiffbot NLP Service​Diffbot's NLP service is a tool for extracting entities,
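A minimal sketch of the "set environment variables" step in the quickstart; the key values are placeholders, and keeping the Diffbot key in a DIFFBOT_API_KEY variable is only a convention here, since the transformer below also accepts the key directly as an argument.
import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder, used by ChatOpenAI later on
diffbot_api_key = os.environ.get("DIFFBOT_API_KEY", "your-diffbot-key")  # obtained from diffbot.com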
1,836
NLP service is a tool for extracting entities, relationships, and semantic context from unstructured text data.
Open In Colab
Open In Colab ->: NLP service is a tool for extracting entities, relationships, and semantic context from unstructured text data.
1,837
This extracted information can be used to construct a knowledge graph. To use their service, you'll need to obtain an API key from Diffbot.from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformerdiffbot_api_key = "DIFFBOT_API_KEY"diffbot_nlp = DiffbotGraphTransformer(diffbot_api_key=diffbot_api_key)This code fetches Wikipedia articles about "Warren Buffett" and then uses DiffbotGraphTransformer to extract entities and relationships. The DiffbotGraphTransformer outputs a structured data GraphDocument, which can be used to populate a graph database.
Open In Colab
Open In Colab ->: This extracted information can be used to construct a knowledge graph. To use their service, you'll need to obtain an API key from Diffbot.from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformerdiffbot_api_key = "DIFFBOT_API_KEY"diffbot_nlp = DiffbotGraphTransformer(diffbot_api_key=diffbot_api_key)This code fetches Wikipedia articles about "Warren Buffett" and then uses DiffbotGraphTransformer to extract entities and relationships. The DiffbotGraphTransformer outputs a structured data GraphDocument, which can be used to populate a graph database.
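Before loading anything into Neo4j, it can help to peek at what Diffbot extracted; this sketch assumes GraphDocument exposes nodes and relationships lists (as in langchain_experimental) and repeats the Wikipedia loading step shown in the next chunk.
from langchain.document_loaders import WikipediaLoader

raw_documents = WikipediaLoader(query="Warren Buffett").load()
graph_documents = diffbot_nlp.convert_to_graph_documents(raw_documents)
doc = graph_documents[0]
print(len(doc.nodes), "nodes,", len(doc.relationships), "relationships")
print(doc.nodes[:3])  # a few of the extracted entities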
1,838
Note that text chunking is avoided due to Diffbot's character limit per API request.from langchain.document_loaders import WikipediaLoaderquery = "Warren Buffett"raw_documents = WikipediaLoader(query=query).load()graph_documents = diffbot_nlp.convert_to_graph_documents(raw_documents)Loading the data into a knowledge graph​You will need to have a running Neo4j instance. One option is to create a free Neo4j database instance in their Aura cloud service. You can also run the database locally using the Neo4j Desktop application, or running a docker container. You can run a local docker container by executing the following script:docker run \ --name neo4j \ -p 7474:7474 -p 7687:7687 \ -d \ -e NEO4J_AUTH=neo4j/pleaseletmein \ -e NEO4J_PLUGINS=\[\"apoc\"\] \ neo4j:latestIf you are using the docker container, you need to wait a couple of seconds for the database to start.from langchain.graphs import Neo4jGraphurl="bolt://localhost:7687"username="neo4j"password="pleaseletmein"graph = Neo4jGraph( url=url, username=username, password=password)The GraphDocuments can be loaded into a knowledge graph using the add_graph_documents method.graph.add_graph_documents(graph_documents)Refresh graph schema information​If the schema of the database changes, you can refresh the schema information needed to generate Cypher statementsgraph.refresh_schema()Querying the graph​We can now use the graph cypher QA chain to ask questions of the graph. It is advisable to use gpt-4 to construct Cypher queries to get the best experience.from langchain.chains import GraphCypherQAChainfrom langchain.chat_models import ChatOpenAIchain = GraphCypherQAChain.from_llm( cypher_llm=ChatOpenAI(temperature=0, model_name="gpt-4"), qa_llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"), graph=graph, verbose=True, )chain.run("Which university did Warren Buffett attend?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH
Open In Colab
Open In Colab ->: Note that text chunking is avoided due to Diffbot's character limit per API request.from langchain.document_loaders import WikipediaLoaderquery = "Warren Buffett"raw_documents = WikipediaLoader(query=query).load()graph_documents = diffbot_nlp.convert_to_graph_documents(raw_documents)Loading the data into a knowledge graph‚ÄãYou will need to have a running Neo4j instance. One option is to create a free Neo4j database instance in their Aura cloud service. You can also run the database locally using the Neo4j Desktop application, or running a docker container. You can run a local docker container by running the executing the following script:docker run \ --name neo4j \ -p 7474:7474 -p 7687:7687 \ -d \ -e NEO4J_AUTH=neo4j/pleaseletmein \ -e NEO4J_PLUGINS=\[\"apoc\"\] \ neo4j:latestIf you are using the docker container, you need to wait a couple of second for the database to start.from langchain.graphs import Neo4jGraphurl="bolt://localhost:7687"username="neo4j"password="pleaseletmein"graph = Neo4jGraph( url=url, username=username, password=password)The GraphDocuments can be loaded into a knowledge graph using the add_graph_documents method.graph.add_graph_documents(graph_documents)Refresh graph schema information‚ÄãIf the schema of database changes, you can refresh the schema information needed to generate Cypher statementsgraph.refresh_schema()Querying the graph‚ÄãWe can now use the graph cypher QA chain to ask question of the graph. It is advisable to use gpt-4 to construct Cypher queries to get the best experience.from langchain.chains import GraphCypherQAChainfrom langchain.chat_models import ChatOpenAIchain = GraphCypherQAChain.from_llm( cypher_llm=ChatOpenAI(temperature=0, model_name="gpt-4"), qa_llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"), graph=graph, verbose=True, )chain.run("Which university did Warren Buffett attend?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH
1,839
chain... Generated Cypher: MATCH (p:Person {name: "Warren Buffett"})-[:EDUCATED_AT]->(o:Organization) RETURN o.name Full Context: [{'o.name': 'New York Institute of Finance'}, {'o.name': 'Alice Deal Junior High School'}, {'o.name': 'Woodrow Wilson High School'}, {'o.name': 'University of Nebraska'}] > Finished chain. 'Warren Buffett attended the University of Nebraska.'chain.run("Who is or was working at Berkshire Hathaway?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (p:Person)-[r:EMPLOYEE_OR_MEMBER_OF]->(o:Organization) WHERE o.name = 'Berkshire Hathaway' RETURN p.name Full Context: [{'p.name': 'Charlie Munger'}, {'p.name': 'Oliver Chace'}, {'p.name': 'Howard Buffett'}, {'p.name': 'Howard'}, {'p.name': 'Susan Buffett'}, {'p.name': 'Warren Buffett'}] > Finished chain. 'Charlie Munger, Oliver Chace, Howard Buffett, Susan Buffett, and Warren Buffett are or were working at Berkshire Hathaway.'
Open In Colab
Open In Colab ->: chain... Generated Cypher: MATCH (p:Person {name: "Warren Buffett"})-[:EDUCATED_AT]->(o:Organization) RETURN o.name Full Context: [{'o.name': 'New York Institute of Finance'}, {'o.name': 'Alice Deal Junior High School'}, {'o.name': 'Woodrow Wilson High School'}, {'o.name': 'University of Nebraska'}] > Finished chain. 'Warren Buffett attended the University of Nebraska.'chain.run("Who is or was working at Berkshire Hathaway?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (p:Person)-[r:EMPLOYEE_OR_MEMBER_OF]->(o:Organization) WHERE o.name = 'Berkshire Hathaway' RETURN p.name Full Context: [{'p.name': 'Charlie Munger'}, {'p.name': 'Oliver Chace'}, {'p.name': 'Howard Buffett'}, {'p.name': 'Howard'}, {'p.name': 'Susan Buffett'}, {'p.name': 'Warren Buffett'}] > Finished chain. 'Charlie Munger, Oliver Chace, Howard Buffett, Susan Buffett, and Warren Buffett are or were working at Berkshire Hathaway.'PreviousGraph queryingNextArangoDB QA chainUse caseOverviewQuickstartDiffbot NLP ServiceLoading the data into a knowledge graphRefresh graph schema informationQuerying the graphCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,840
GraphSparqlQAChain | 🦜️🔗 Langchain
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\ ->: GraphSparqlQAChain | 🦜️🔗 Langchain
1,841
GraphSparqlQAChainGraph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\ ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingDiffbot Graph TransformerArangoDB QA chainNeo4j DB QA chainFalkorDBQAChainHugeGraph QA ChainKuzuQAChainMemgraph QA chainNebulaGraphQAChainNetworkX Graph QAGraphSparqlQAChainNeptune Open Cypher QA ChainGraph queryingGraphSparqlQAChainOn this pageGraphSparqlQAChainGraph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\
1,842
Disclaimer: To date, SPARQL query generation via LLMs is still a bit unstable. Be especially careful with UPDATE queries, which alter the graph.There are several sources you can run queries against, including files on the web, files you have available locally, SPARQL endpoints, e.g., Wikidata, and triple stores.from langchain.chat_models import ChatOpenAIfrom langchain.chains import GraphSparqlQAChainfrom langchain.graphs import RdfGraphgraph = RdfGraph( source_file="http://www.w3.org/People/Berners-Lee/card", standard="rdf", local_copy="test.ttl",)Note that providing a local_copy is necessary for storing changes locally if the source is read-only.Refresh graph schema information​If the schema of the database changes, you can refresh the schema information needed to generate SPARQL queries.graph.load_schema()graph.get_schema In the following, each IRI is followed by the local name and optionally its description in parentheses. The RDF graph supports the following node types: <http://xmlns.com/foaf/0.1/PersonalProfileDocument> (PersonalProfileDocument, None), <http://www.w3.org/ns/auth/cert#RSAPublicKey> (RSAPublicKey, None), <http://www.w3.org/2000/10/swap/pim/contact#Male> (Male, None), <http://xmlns.com/foaf/0.1/Person> (Person, None), <http://www.w3.org/2006/vcard/ns#Work> (Work, None) The RDF graph supports the following relationships: <http://www.w3.org/2000/01/rdf-schema#seeAlso> (seeAlso, None), <http://purl.org/dc/elements/1.1/title> (title, None), <http://xmlns.com/foaf/0.1/mbox_sha1sum> (mbox_sha1sum, None), <http://xmlns.com/foaf/0.1/maker> (maker, None), <http://www.w3.org/ns/solid/terms#oidcIssuer> (oidcIssuer, None), <http://www.w3.org/2000/10/swap/pim/contact#publicHomePage> (publicHomePage, None), <http://xmlns.com/foaf/0.1/openid> (openid, None), <http://www.w3.org/ns/pim/space#storage> (storage, None), <http://xmlns.com/foaf/0.1/name> (name, None), <http://www.w3.org/2000/10/swap/pim/contact#country> (country, None),
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\ ->: Disclaimer: To date, SPARQL query generation via LLMs is still a bit unstable. Be especially careful with UPDATE queries, which alter the graph.There are several sources you can run queries against, including files on the web, files you have available locally, SPARQL endpoints, e.g., Wikidata, and triple stores.from langchain.chat_models import ChatOpenAIfrom langchain.chains import GraphSparqlQAChainfrom langchain.graphs import RdfGraphgraph = RdfGraph( source_file="http://www.w3.org/People/Berners-Lee/card", standard="rdf", local_copy="test.ttl",)Note that providing a local_file is necessary for storing changes locally if the source is read-only.Refresh graph schema information‚ÄãIf the schema of the database changes, you can refresh the schema information needed to generate SPARQL queries.graph.load_schema()graph.get_schema In the following, each IRI is followed by the local name and optionally its description in parentheses. The RDF graph supports the following node types: <http://xmlns.com/foaf/0.1/PersonalProfileDocument> (PersonalProfileDocument, None), <http://www.w3.org/ns/auth/cert#RSAPublicKey> (RSAPublicKey, None), <http://www.w3.org/2000/10/swap/pim/contact#Male> (Male, None), <http://xmlns.com/foaf/0.1/Person> (Person, None), <http://www.w3.org/2006/vcard/ns#Work> (Work, None) The RDF graph supports the following relationships: <http://www.w3.org/2000/01/rdf-schema#seeAlso> (seeAlso, None), <http://purl.org/dc/elements/1.1/title> (title, None), <http://xmlns.com/foaf/0.1/mbox_sha1sum> (mbox_sha1sum, None), <http://xmlns.com/foaf/0.1/maker> (maker, None), <http://www.w3.org/ns/solid/terms#oidcIssuer> (oidcIssuer, None), <http://www.w3.org/2000/10/swap/pim/contact#publicHomePage> (publicHomePage, None), <http://xmlns.com/foaf/0.1/openid> (openid, None), <http://www.w3.org/ns/pim/space#storage> (storage, None), <http://xmlns.com/foaf/0.1/name> (name, None), <http://www.w3.org/2000/10/swap/pim/contact#country> (country, None),
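The text above also mentions SPARQL endpoints such as Wikidata as a possible source; a sketch of pointing RdfGraph at a read-only public endpoint follows, assuming the constructor accepts a query_endpoint argument (verify against your installed version).
graph = RdfGraph(
    query_endpoint="https://query.wikidata.org/sparql",  # public Wikidata SPARQL endpoint
    standard="rdf",
)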
1,843
(country, None), <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> (type, None), <http://www.w3.org/ns/solid/terms#profileHighlightColor> (profileHighlightColor, None), <http://www.w3.org/ns/pim/space#preferencesFile> (preferencesFile, None), <http://www.w3.org/2000/01/rdf-schema#label> (label, None), <http://www.w3.org/ns/auth/cert#modulus> (modulus, None), <http://www.w3.org/2000/10/swap/pim/contact#participant> (participant, None), <http://www.w3.org/2000/10/swap/pim/contact#street2> (street2, None), <http://www.w3.org/2006/vcard/ns#locality> (locality, None), <http://xmlns.com/foaf/0.1/nick> (nick, None), <http://xmlns.com/foaf/0.1/homepage> (homepage, None), <http://creativecommons.org/ns#license> (license, None), <http://xmlns.com/foaf/0.1/givenname> (givenname, None), <http://www.w3.org/2006/vcard/ns#street-address> (street-address, None), <http://www.w3.org/2006/vcard/ns#postal-code> (postal-code, None), <http://www.w3.org/2000/10/swap/pim/contact#street> (street, None), <http://www.w3.org/2003/01/geo/wgs84_pos#lat> (lat, None), <http://xmlns.com/foaf/0.1/primaryTopic> (primaryTopic, None), <http://www.w3.org/2006/vcard/ns#fn> (fn, None), <http://www.w3.org/2003/01/geo/wgs84_pos#location> (location, None), <http://usefulinc.com/ns/doap#developer> (developer, None), <http://www.w3.org/2000/10/swap/pim/contact#city> (city, None), <http://www.w3.org/2006/vcard/ns#region> (region, None), <http://xmlns.com/foaf/0.1/member> (member, None), <http://www.w3.org/2003/01/geo/wgs84_pos#long> (long, None), <http://www.w3.org/2000/10/swap/pim/contact#address> (address, None), <http://xmlns.com/foaf/0.1/family_name> (family_name, None), <http://xmlns.com/foaf/0.1/account> (account, None), <http://xmlns.com/foaf/0.1/workplaceHomepage> (workplaceHomepage, None), <http://purl.org/dc/terms/title> (title, None), <http://www.w3.org/ns/solid/terms#publicTypeIndex> (publicTypeIndex, None), <http://www.w3.org/2000/10/swap/pim/contact#office> (office, None),
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\ ->: (country, None), <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> (type, None), <http://www.w3.org/ns/solid/terms#profileHighlightColor> (profileHighlightColor, None), <http://www.w3.org/ns/pim/space#preferencesFile> (preferencesFile, None), <http://www.w3.org/2000/01/rdf-schema#label> (label, None), <http://www.w3.org/ns/auth/cert#modulus> (modulus, None), <http://www.w3.org/2000/10/swap/pim/contact#participant> (participant, None), <http://www.w3.org/2000/10/swap/pim/contact#street2> (street2, None), <http://www.w3.org/2006/vcard/ns#locality> (locality, None), <http://xmlns.com/foaf/0.1/nick> (nick, None), <http://xmlns.com/foaf/0.1/homepage> (homepage, None), <http://creativecommons.org/ns#license> (license, None), <http://xmlns.com/foaf/0.1/givenname> (givenname, None), <http://www.w3.org/2006/vcard/ns#street-address> (street-address, None), <http://www.w3.org/2006/vcard/ns#postal-code> (postal-code, None), <http://www.w3.org/2000/10/swap/pim/contact#street> (street, None), <http://www.w3.org/2003/01/geo/wgs84_pos#lat> (lat, None), <http://xmlns.com/foaf/0.1/primaryTopic> (primaryTopic, None), <http://www.w3.org/2006/vcard/ns#fn> (fn, None), <http://www.w3.org/2003/01/geo/wgs84_pos#location> (location, None), <http://usefulinc.com/ns/doap#developer> (developer, None), <http://www.w3.org/2000/10/swap/pim/contact#city> (city, None), <http://www.w3.org/2006/vcard/ns#region> (region, None), <http://xmlns.com/foaf/0.1/member> (member, None), <http://www.w3.org/2003/01/geo/wgs84_pos#long> (long, None), <http://www.w3.org/2000/10/swap/pim/contact#address> (address, None), <http://xmlns.com/foaf/0.1/family_name> (family_name, None), <http://xmlns.com/foaf/0.1/account> (account, None), <http://xmlns.com/foaf/0.1/workplaceHomepage> (workplaceHomepage, None), <http://purl.org/dc/terms/title> (title, None), <http://www.w3.org/ns/solid/terms#publicTypeIndex> (publicTypeIndex, None), <http://www.w3.org/2000/10/swap/pim/contact#office> (office, None),
1,844
(office, None), <http://www.w3.org/2000/10/swap/pim/contact#homePage> (homePage, None), <http://xmlns.com/foaf/0.1/mbox> (mbox, None), <http://www.w3.org/2000/10/swap/pim/contact#preferredURI> (preferredURI, None), <http://www.w3.org/ns/solid/terms#profileBackgroundColor> (profileBackgroundColor, None), <http://schema.org/owns> (owns, None), <http://xmlns.com/foaf/0.1/based_near> (based_near, None), <http://www.w3.org/2006/vcard/ns#hasAddress> (hasAddress, None), <http://xmlns.com/foaf/0.1/img> (img, None), <http://www.w3.org/2000/10/swap/pim/contact#assistant> (assistant, None), <http://xmlns.com/foaf/0.1/title> (title, None), <http://www.w3.org/ns/auth/cert#key> (key, None), <http://www.w3.org/ns/ldp#inbox> (inbox, None), <http://www.w3.org/ns/solid/terms#editableProfile> (editableProfile, None), <http://www.w3.org/2000/10/swap/pim/contact#postalCode> (postalCode, None), <http://xmlns.com/foaf/0.1/weblog> (weblog, None), <http://www.w3.org/ns/auth/cert#exponent> (exponent, None), <http://rdfs.org/sioc/ns#avatar> (avatar, None) Querying the graph‚ÄãNow, you can use the graph SPARQL QA chain to ask questions about the graph.chain = GraphSparqlQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("What is Tim Berners-Lee's work homepage?") > Entering new GraphSparqlQAChain chain... Identified intent: SELECT Generated SPARQL: PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT ?homepage WHERE { ?person foaf:name "Tim Berners-Lee" . ?person foaf:workplaceHomepage ?homepage . } Full Context: [] > Finished chain. "Tim Berners-Lee's work homepage is http://www.w3.org/People/Berners-Lee/."Updating the graph‚ÄãAnalogously, you can update the graph, i.e., insert triples, using natural language.chain.run( "Save that the person with the name 'Timothy Berners-Lee' has a work homepage at 'http://www.w3.org/foo/bar/'") > Entering new GraphSparqlQAChain chain... Identified
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\ ->: (office, None), <http://www.w3.org/2000/10/swap/pim/contact#homePage> (homePage, None), <http://xmlns.com/foaf/0.1/mbox> (mbox, None), <http://www.w3.org/2000/10/swap/pim/contact#preferredURI> (preferredURI, None), <http://www.w3.org/ns/solid/terms#profileBackgroundColor> (profileBackgroundColor, None), <http://schema.org/owns> (owns, None), <http://xmlns.com/foaf/0.1/based_near> (based_near, None), <http://www.w3.org/2006/vcard/ns#hasAddress> (hasAddress, None), <http://xmlns.com/foaf/0.1/img> (img, None), <http://www.w3.org/2000/10/swap/pim/contact#assistant> (assistant, None), <http://xmlns.com/foaf/0.1/title> (title, None), <http://www.w3.org/ns/auth/cert#key> (key, None), <http://www.w3.org/ns/ldp#inbox> (inbox, None), <http://www.w3.org/ns/solid/terms#editableProfile> (editableProfile, None), <http://www.w3.org/2000/10/swap/pim/contact#postalCode> (postalCode, None), <http://xmlns.com/foaf/0.1/weblog> (weblog, None), <http://www.w3.org/ns/auth/cert#exponent> (exponent, None), <http://rdfs.org/sioc/ns#avatar> (avatar, None) Querying the graph‚ÄãNow, you can use the graph SPARQL QA chain to ask questions about the graph.chain = GraphSparqlQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("What is Tim Berners-Lee's work homepage?") > Entering new GraphSparqlQAChain chain... Identified intent: SELECT Generated SPARQL: PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT ?homepage WHERE { ?person foaf:name "Tim Berners-Lee" . ?person foaf:workplaceHomepage ?homepage . } Full Context: [] > Finished chain. "Tim Berners-Lee's work homepage is http://www.w3.org/People/Berners-Lee/."Updating the graph‚ÄãAnalogously, you can update the graph, i.e., insert triples, using natural language.chain.run( "Save that the person with the name 'Timothy Berners-Lee' has a work homepage at 'http://www.w3.org/foo/bar/'") > Entering new GraphSparqlQAChain chain... Identified
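Because the chain can also generate SPARQL UPDATE statements (as in the step that follows), and the disclaimer above warns that these alter the graph, it can be worth snapshotting the local copy first. Below is a minimal sketch using rdflib directly, assuming the local_copy file test.ttl from the setup above; the backup file name is illustrative.

# Minimal sketch: snapshot the local Turtle copy before allowing UPDATE queries.
# Assumes the local_copy "test.ttl" written by RdfGraph above; "test_backup.ttl"
# is an illustrative name, not something the chain requires.
from rdflib import Graph

snapshot = Graph()
snapshot.parse("test.ttl", format="turtle")
snapshot.serialize(destination="test_backup.ttl", format="turtle")
print(f"Backed up {len(snapshot)} triples to test_backup.ttl")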
1,845
new GraphSparqlQAChain chain... Identified intent: UPDATE Generated SPARQL: PREFIX foaf: <http://xmlns.com/foaf/0.1/> INSERT { ?person foaf:workplaceHomepage <http://www.w3.org/foo/bar/> . } WHERE { ?person foaf:name "Timothy Berners-Lee" . } > Finished chain. 'Successfully inserted triples into the graph.'Let's verify the results:query = ( """PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n""" """SELECT ?hp\n""" """WHERE {\n""" """ ?person foaf:name "Timothy Berners-Lee" . \n""" """ ?person foaf:workplaceHomepage ?hp .\n""" """}""")graph.query(query) [(rdflib.term.URIRef('https://www.w3.org/'),), (rdflib.term.URIRef('http://www.w3.org/foo/bar/'),)]PreviousNetworkX Graph QANextNeptune Open Cypher QA ChainRefresh graph schema informationQuerying the graphUpdating the graphCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\ ->: new GraphSparqlQAChain chain... Identified intent: UPDATE Generated SPARQL: PREFIX foaf: <http://xmlns.com/foaf/0.1/> INSERT { ?person foaf:workplaceHomepage <http://www.w3.org/foo/bar/> . } WHERE { ?person foaf:name "Timothy Berners-Lee" . } > Finished chain. 'Successfully inserted triples into the graph.'Let's verify the results:query = ( """PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n""" """SELECT ?hp\n""" """WHERE {\n""" """ ?person foaf:name "Timothy Berners-Lee" . \n""" """ ?person foaf:workplaceHomepage ?hp .\n""" """}""")graph.query(query) [(rdflib.term.URIRef('https://www.w3.org/'),), (rdflib.term.URIRef('http://www.w3.org/foo/bar/'),)]PreviousNetworkX Graph QANextNeptune Open Cypher QA ChainRefresh graph schema informationQuerying the graphUpdating the graphCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
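As an additional cross-check, the local Turtle copy maintained by RdfGraph can be reloaded with plain rdflib to confirm that the inserted triple was persisted. A small sketch, again assuming the local_copy file test.ttl from the setup above.

# Sketch: reload the local copy and list workplaceHomepage triples directly.
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
g = Graph()
g.parse("test.ttl", format="turtle")
for subject, homepage in g.subject_objects(FOAF.workplaceHomepage):
    print(subject, "->", homepage)  # should include http://www.w3.org/foo/bar/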
1,846
KuzuQAChain | 🦜️🔗 Langchain
This notebook shows how to use LLMs to provide a natural language interface to Kùzu database.
This notebook shows how to use LLMs to provide a natural language interface to Kùzu database. ->: KuzuQAChain | 🦜️🔗 Langchain
1,847
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingDiffbot Graph TransformerArangoDB QA chainNeo4j DB QA chainFalkorDBQAChainHugeGraph QA ChainKuzuQAChainMemgraph QA chainNebulaGraphQAChainNetworkX Graph QAGraphSparqlQAChainNeptune Open Cypher QA ChainGraph queryingKuzuQAChainOn this pageKuzuQAChainThis notebook shows how to use LLMs to provide a natural language interface to Kùzu database.Kùzu is an in-process property graph database management system. You can simply install it with pip:pip install kuzuOnce installed, you can simply import it and start creating a database on the local machine and connect to it:import kuzudb = kuzu.Database("test_db")conn = kuzu.Connection(db)First, we create the schema for a simple movie database:conn.execute("CREATE NODE TABLE Movie (name STRING, PRIMARY KEY(name))")conn.execute( "CREATE NODE TABLE Person (name STRING, birthDate STRING, PRIMARY KEY(name))")conn.execute("CREATE REL TABLE ActedIn (FROM Person TO Movie)") <kuzu.query_result.QueryResult at 0x1066ff410>Then we can insert some data.conn.execute("CREATE (:Person {name: 'Al Pacino', birthDate: '1940-04-25'})")conn.execute("CREATE (:Person {name: 'Robert De Niro', birthDate: '1943-08-17'})")conn.execute("CREATE (:Movie {name: 'The Godfather'})")conn.execute("CREATE (:Movie {name: 'The Godfather: Part II'})")conn.execute( "CREATE (:Movie {name: 'The Godfather Coda: The Death of Michael Corleone'})")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name =
This notebook shows how to use LLMs to provide a natural language interface to Kùzu database.
This notebook shows how to use LLMs to provide a natural language interface to Kùzu database. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingDiffbot Graph TransformerArangoDB QA chainNeo4j DB QA chainFalkorDBQAChainHugeGraph QA ChainKuzuQAChainMemgraph QA chainNebulaGraphQAChainNetworkX Graph QAGraphSparqlQAChainNeptune Open Cypher QA ChainGraph queryingKuzuQAChainOn this pageKuzuQAChainThis notebook shows how to use LLMs to provide a natural language interface to Kùzu database.Kùzu is an in-process property graph database management system. You can simply install it with pip:pip install kuzuOnce installed, you can simply import it and start creating a database on the local machine and connect to it:import kuzudb = kuzu.Database("test_db")conn = kuzu.Connection(db)First, we create the schema for a simple movie database:conn.execute("CREATE NODE TABLE Movie (name STRING, PRIMARY KEY(name))")conn.execute( "CREATE NODE TABLE Person (name STRING, birthDate STRING, PRIMARY KEY(name))")conn.execute("CREATE REL TABLE ActedIn (FROM Person TO Movie)") <kuzu.query_result.QueryResult at 0x1066ff410>Then we can insert some data.conn.execute("CREATE (:Person {name: 'Al Pacino', birthDate: '1940-04-25'})")conn.execute("CREATE (:Person {name: 'Robert De Niro', birthDate: '1943-08-17'})")conn.execute("CREATE (:Movie {name: 'The Godfather'})")conn.execute("CREATE (:Movie {name: 'The Godfather: Part II'})")conn.execute( "CREATE (:Movie {name: 'The Godfather Coda: The Death of Michael Corleone'})")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name =
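Before involving an LLM, the inserted data can be sanity-checked with a direct Cypher query on the same connection. A short sketch using Kùzu's Python result iteration; the exact printed format depends on the Kùzu version.

# Sketch: verify the nodes and relationships created above with plain Cypher.
result = conn.execute(
    "MATCH (p:Person)-[:ActedIn]->(m:Movie) RETURN p.name, m.name"
)
while result.has_next():
    print(result.get_next())  # e.g. ['Al Pacino', 'The Godfather']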
1,848
"MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather Coda: The Death of Michael Corleone' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Robert De Niro' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)") <kuzu.query_result.QueryResult at 0x107016210>Creating KuzuQAChain‚ÄãWe can now create the KuzuGraph and KuzuQAChain. To create the KuzuGraph we simply need to pass the database object to the KuzuGraph constructor.from langchain.chat_models import ChatOpenAIfrom langchain.graphs import KuzuGraphfrom langchain.chains import KuzuQAChaingraph = KuzuGraph(db)chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)Refresh graph schema information‚ÄãIf the schema of database changes, you can refresh the schema information needed to generate Cypher statements.# graph.refresh_schema()print(graph.get_schema) Node properties: [{'properties': [('name', 'STRING')], 'label': 'Movie'}, {'properties': [('name', 'STRING'), ('birthDate', 'STRING')], 'label': 'Person'}] Relationships properties: [{'properties': [], 'label': 'ActedIn'}] Relationships: ['(:Person)-[:ActedIn]->(:Movie)'] Querying the graph‚ÄãWe can now use the KuzuQAChain to ask question of the graphchain.run("Who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie {name: 'The Godfather: Part II'}) RETURN p.name Full Context: [{'p.name': 'Al Pacino'}, {'p.name': 'Robert De Niro'}] > Finished chain. 'Al Pacino and Robert De Niro both played in The Godfather: Part II.'chain.run("Robert De Niro played in which movies?") > Entering new chain... Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN m.name Full Context: [{'m.name': 'The Godfather: Part II'}] > Finished chain. 'Robert De Niro played in The Godfather: Part
This notebook shows how to use LLMs to provide a natural language interface to Kùzu database.
This notebook shows how to use LLMs to provide a natural language interface to K√πzu database. ->: "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather Coda: The Death of Michael Corleone' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Robert De Niro' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)") <kuzu.query_result.QueryResult at 0x107016210>Creating KuzuQAChain‚ÄãWe can now create the KuzuGraph and KuzuQAChain. To create the KuzuGraph we simply need to pass the database object to the KuzuGraph constructor.from langchain.chat_models import ChatOpenAIfrom langchain.graphs import KuzuGraphfrom langchain.chains import KuzuQAChaingraph = KuzuGraph(db)chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)Refresh graph schema information‚ÄãIf the schema of database changes, you can refresh the schema information needed to generate Cypher statements.# graph.refresh_schema()print(graph.get_schema) Node properties: [{'properties': [('name', 'STRING')], 'label': 'Movie'}, {'properties': [('name', 'STRING'), ('birthDate', 'STRING')], 'label': 'Person'}] Relationships properties: [{'properties': [], 'label': 'ActedIn'}] Relationships: ['(:Person)-[:ActedIn]->(:Movie)'] Querying the graph‚ÄãWe can now use the KuzuQAChain to ask question of the graphchain.run("Who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie {name: 'The Godfather: Part II'}) RETURN p.name Full Context: [{'p.name': 'Al Pacino'}, {'p.name': 'Robert De Niro'}] > Finished chain. 'Al Pacino and Robert De Niro both played in The Godfather: Part II.'chain.run("Robert De Niro played in which movies?") > Entering new chain... Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN m.name Full Context: [{'m.name': 'The Godfather: Part II'}] > Finished chain. 'Robert De Niro played in The Godfather: Part
1,849
'Robert De Niro played in The Godfather: Part II.'chain.run("Robert De Niro is born in which year?") > Entering new chain... Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN p.birthDate Full Context: [{'p.birthDate': '1943-08-17'}] > Finished chain. 'Robert De Niro was born on August 17, 1943.'chain.run("Who is the oldest actor who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie{name:'The Godfather: Part II'}) WITH p, m, p.birthDate AS birthDate ORDER BY birthDate ASC LIMIT 1 RETURN p.name Full Context: [{'p.name': 'Al Pacino'}] > Finished chain. 'The oldest actor who played in The Godfather: Part II is Al Pacino.'PreviousHugeGraph QA ChainNextMemgraph QA chainCreating KuzuQAChainRefresh graph schema informationQuerying the graphCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to use LLMs to provide a natural language interface to Kùzu database.
This notebook shows how to use LLMs to provide a natural language interface to Kùzu database. ->: 'Robert De Niro played in The Godfather: Part II.'chain.run("Robert De Niro is born in which year?") > Entering new chain... Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN p.birthDate Full Context: [{'p.birthDate': '1943-08-17'}] > Finished chain. 'Robert De Niro was born on August 17, 1943.'chain.run("Who is the oldest actor who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie{name:'The Godfather: Part II'}) WITH p, m, p.birthDate AS birthDate ORDER BY birthDate ASC LIMIT 1 RETURN p.name Full Context: [{'p.name': 'Al Pacino'}] > Finished chain. 'The oldest actor who played in The Godfather: Part II is Al Pacino.'PreviousHugeGraph QA ChainNextMemgraph QA chainCreating KuzuQAChainRefresh graph schema informationQuerying the graphCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
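Generated Cypher is not guaranteed to be valid, so application code often wraps the chain in a small guard rather than calling chain.run directly. The helper below is an illustrative sketch, not part of the LangChain or Kùzu APIs.

# Sketch: tolerate an occasional malformed generated query instead of crashing.
def ask_graph(question: str) -> str:
    """Illustrative wrapper around chain.run with a simple fallback."""
    try:
        return chain.run(question)
    except Exception as err:  # e.g. a Cypher syntax error raised by Kùzu
        return f"Could not answer {question!r}: {err}"

print(ask_graph("Which movies did Al Pacino act in?"))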
1,850
Extraction | 🦜️🔗 Langchain
Open In Collab
Open In Collab ->: Extraction | 🦜️🔗 Langchain
1,851
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingExtractionOn this pageExtractionUse case​Getting structured output from raw LLM generations is hard.For example, suppose you need the model output formatted with a specific schema for:Extracting a structured row to insert into a database Extracting API parametersExtracting different parts of a user query (e.g., for semantic vs keyword search)Overview​There are two primary approaches for this:Functions: Some LLMs can call functions to extract arbitrary entities from LLM responses.Parsing: Output parsers are classes that structure LLM responses. Only some LLMs support functions (e.g., OpenAI), and they are more general than parsers. Parsers extract precisely what is enumerated in a provided schema (e.g., specific attributes of a person).Functions can infer things beyond of a provided schema (e.g., attributes about a person that you did not ask for).Quickstart​OpenAI functions are one way to get started with extraction.Define a schema that specifies the properties we want to extract from the LLM output.Then, we can use create_extraction_chain to extract our desired schema using an OpenAI function call.pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from langchain.chat_models import ChatOpenAIfrom langchain.chains import create_extraction_chain# Schemaschema = { "properties": { "name": {"type": "string"}, "height": {"type": "integer"}, "hair_color": {"type": "string"}, }, "required": ["name", "height"],}# Input inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""# Run chainllm =
Open In Collab
Open In Collab ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingExtractionOn this pageExtractionUse case​Getting structured output from raw LLM generations is hard.For example, suppose you need the model output formatted with a specific schema for:Extracting a structured row to insert into a database Extracting API parametersExtracting different parts of a user query (e.g., for semantic vs keyword search)Overview​There are two primary approaches for this:Functions: Some LLMs can call functions to extract arbitrary entities from LLM responses.Parsing: Output parsers are classes that structure LLM responses. Only some LLMs support functions (e.g., OpenAI), and they are more general than parsers. Parsers extract precisely what is enumerated in a provided schema (e.g., specific attributes of a person).Functions can infer things beyond of a provided schema (e.g., attributes about a person that you did not ask for).Quickstart​OpenAI functions are one way to get started with extraction.Define a schema that specifies the properties we want to extract from the LLM output.Then, we can use create_extraction_chain to extract our desired schema using an OpenAI function call.pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from langchain.chat_models import ChatOpenAIfrom langchain.chains import create_extraction_chain# Schemaschema = { "properties": { "name": {"type": "string"}, "height": {"type": "integer"}, "hair_color": {"type": "string"}, }, "required": ["name", "height"],}# Input inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""# Run chainllm =
1,852
a brunette and Alex is blonde."""# Run chainllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")chain = create_extraction_chain(schema, llm)chain.run(inp) [{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'}, {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]Option 1: OpenAI functions‚ÄãLooking under the hood‚ÄãLet's dig into what is happening when we call create_extraction_chain.The LangSmith trace shows that we call the function information_extraction on the input string, inp.This information_extraction function is defined here and returns a dict.We can see the dict in the model output: { "info": [ { "name": "Alex", "height": 5, "hair_color": "blonde" }, { "name": "Claudia", "height": 6, "hair_color": "brunette" } ] }The create_extraction_chain then parses the raw LLM output for us using JsonKeyOutputFunctionsParser.This results in the list of JSON objects returned by the chain above:[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'}, {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]Multiple entity types‚ÄãWe can extend this further.Let's say we want to differentiate between dogs and people.We can add person_ and dog_ prefixes for each propertyschema = { "properties": { "person_name": {"type": "string"}, "person_height": {"type": "integer"}, "person_hair_color": {"type": "string"}, "dog_name": {"type": "string"}, "dog_breed": {"type": "string"}, }, "required": ["person_name", "person_height"],}chain = create_extraction_chain(schema, llm)inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.Alex's dog Frosty is a labrador and likes to play hide and seek."""chain.run(inp) [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde', 'dog_name': 'Frosty', 'dog_breed': 'labrador'},
Open In Collab
Open In Collab ->: a brunette and Alex is blonde."""# Run chainllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")chain = create_extraction_chain(schema, llm)chain.run(inp) [{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'}, {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]Option 1: OpenAI functions‚ÄãLooking under the hood‚ÄãLet's dig into what is happening when we call create_extraction_chain.The LangSmith trace shows that we call the function information_extraction on the input string, inp.This information_extraction function is defined here and returns a dict.We can see the dict in the model output: { "info": [ { "name": "Alex", "height": 5, "hair_color": "blonde" }, { "name": "Claudia", "height": 6, "hair_color": "brunette" } ] }The create_extraction_chain then parses the raw LLM output for us using JsonKeyOutputFunctionsParser.This results in the list of JSON objects returned by the chain above:[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'}, {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]Multiple entity types‚ÄãWe can extend this further.Let's say we want to differentiate between dogs and people.We can add person_ and dog_ prefixes for each propertyschema = { "properties": { "person_name": {"type": "string"}, "person_height": {"type": "integer"}, "person_hair_color": {"type": "string"}, "dog_name": {"type": "string"}, "dog_breed": {"type": "string"}, }, "required": ["person_name", "person_height"],}chain = create_extraction_chain(schema, llm)inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.Alex's dog Frosty is a labrador and likes to play hide and seek."""chain.run(inp) [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde', 'dog_name': 'Frosty', 'dog_breed': 'labrador'},
1,853
'Frosty', 'dog_breed': 'labrador'}, {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}]Unrelated entities‚ÄãIf we use required: [], we allow the model to return only person attributes or only dog attributes for a single entity (person or dog).schema = { "properties": { "person_name": {"type": "string"}, "person_height": {"type": "integer"}, "person_hair_color": {"type": "string"}, "dog_name": {"type": "string"}, "dog_breed": {"type": "string"}, }, "required": [],}chain = create_extraction_chain(schema, llm)inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by."""chain.run(inp) [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'}, {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}, {'dog_name': 'Willow', 'dog_breed': 'German Shepherd'}, {'dog_name': 'Milo', 'dog_breed': 'border collie'}]Extra information‚ÄãThe power of functions (relative to using parsers alone) lies in the ability to perform semantic extraction.In particular, we can ask for things that are not explicitly enumerated in the schema.Suppose we want unspecified additional information about dogs. We can use add a placeholder for unstructured extraction, dog_extra_info.schema = { "properties": { "person_name": {"type": "string"}, "person_height": {"type": "integer"}, "person_hair_color": {"type": "string"}, "dog_name": {"type": "string"}, "dog_breed": {"type": "string"}, "dog_extra_info": {"type": "string"}, },}chain = create_extraction_chain(schema, llm)chain.run(inp) [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'}, {'person_name': 'Claudia',
Open In Collab
Open In Collab ->: 'Frosty', 'dog_breed': 'labrador'}, {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}]Unrelated entities‚ÄãIf we use required: [], we allow the model to return only person attributes or only dog attributes for a single entity (person or dog).schema = { "properties": { "person_name": {"type": "string"}, "person_height": {"type": "integer"}, "person_hair_color": {"type": "string"}, "dog_name": {"type": "string"}, "dog_breed": {"type": "string"}, }, "required": [],}chain = create_extraction_chain(schema, llm)inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by."""chain.run(inp) [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'}, {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}, {'dog_name': 'Willow', 'dog_breed': 'German Shepherd'}, {'dog_name': 'Milo', 'dog_breed': 'border collie'}]Extra information‚ÄãThe power of functions (relative to using parsers alone) lies in the ability to perform semantic extraction.In particular, we can ask for things that are not explicitly enumerated in the schema.Suppose we want unspecified additional information about dogs. We can use add a placeholder for unstructured extraction, dog_extra_info.schema = { "properties": { "person_name": {"type": "string"}, "person_height": {"type": "integer"}, "person_hair_color": {"type": "string"}, "dog_name": {"type": "string"}, "dog_breed": {"type": "string"}, "dog_extra_info": {"type": "string"}, },}chain = create_extraction_chain(schema, llm)chain.run(inp) [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'}, {'person_name': 'Claudia',
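Because the schema dict is forwarded to the model as part of a function definition, properties can carry extra JSON-Schema hints. The sketch below adds an enum to steer hair-color values; whether the hint is honored depends on how create_extraction_chain forwards the schema and on the function-calling model, so treat this as an assumption to verify rather than documented behaviour.

# Sketch: constrain a property with an enum hint (behaviour to verify; the
# model may still deviate from the listed values).
schema = {
    "properties": {
        "person_name": {"type": "string"},
        "person_hair_color": {
            "type": "string",
            "enum": ["blonde", "brunette", "red", "black", "other"],
        },
    },
    "required": ["person_name"],
}
chain = create_extraction_chain(schema, llm)
chain.run("Claudia is a brunette and Alex is blonde.")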
1,854
'blonde'}, {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}, {'dog_name': 'Willow', 'dog_breed': 'German Shepherd', 'dog_extra_info': 'likes to play with other dogs'}, {'dog_name': 'Milo', 'dog_breed': 'border collie', 'dog_extra_info': 'lives close by'}]This gives us additional information about the dogs.Pydantic‚ÄãPydantic is a data validation and settings management library for Python. It allows you to create data classes with attributes that are automatically validated when you instantiate an object.Lets define a class with attributes annotated with types.from typing import Optionalfrom langchain.pydantic_v1 import BaseModelfrom langchain.chains import create_extraction_chain_pydantic# Pydantic data classclass Properties(BaseModel): person_name: str person_height: int person_hair_color: str dog_breed: Optional[str] dog_name: Optional[str] # Extractionchain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)# Run inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""chain.run(inp) [Properties(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed=None, dog_name=None), Properties(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)]As we can see from the trace, we use the function information_extraction, as above, with the Pydantic schema. Option 2: Parsing‚ÄãOutput parsers are classes that help structure language model responses. As shown above, they are used to parse the output of the OpenAI function calls in create_extraction_chain.But, they can be used independent of functions.Pydantic‚ÄãJust as a above, let's parse a generation based on a Pydantic data class.from typing import Sequence, Optionalfrom langchain.prompts import ( PromptTemplate, ChatPromptTemplate,
Open In Collab
Open In Collab ->: 'blonde'}, {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}, {'dog_name': 'Willow', 'dog_breed': 'German Shepherd', 'dog_extra_info': 'likes to play with other dogs'}, {'dog_name': 'Milo', 'dog_breed': 'border collie', 'dog_extra_info': 'lives close by'}]This gives us additional information about the dogs.Pydantic‚ÄãPydantic is a data validation and settings management library for Python. It allows you to create data classes with attributes that are automatically validated when you instantiate an object.Lets define a class with attributes annotated with types.from typing import Optionalfrom langchain.pydantic_v1 import BaseModelfrom langchain.chains import create_extraction_chain_pydantic# Pydantic data classclass Properties(BaseModel): person_name: str person_height: int person_hair_color: str dog_breed: Optional[str] dog_name: Optional[str] # Extractionchain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)# Run inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""chain.run(inp) [Properties(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed=None, dog_name=None), Properties(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)]As we can see from the trace, we use the function information_extraction, as above, with the Pydantic schema. Option 2: Parsing‚ÄãOutput parsers are classes that help structure language model responses. As shown above, they are used to parse the output of the OpenAI function calls in create_extraction_chain.But, they can be used independent of functions.Pydantic‚ÄãJust as a above, let's parse a generation based on a Pydantic data class.from typing import Sequence, Optionalfrom langchain.prompts import ( PromptTemplate, ChatPromptTemplate,
1,855
( PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate,)from langchain.llms import OpenAIfrom pydantic import BaseModel, Field, validatorfrom langchain.output_parsers import PydanticOutputParserclass Person(BaseModel): person_name: str person_height: int person_hair_color: str dog_breed: Optional[str] dog_name: Optional[str]class People(BaseModel): """Identifying information about all people in a text.""" people: Sequence[Person] # Run query = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=People)# Promptprompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)# Run_input = prompt.format_prompt(query=query)model = OpenAI(temperature=0)output = model(_input.to_string())parser.parse(output) People(people=[Person(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed=None, dog_name=None), Person(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)])We can see from the LangSmith trace that we get the same output as above.We can see that we provide a two-shot prompt in order to instruct the LLM to output in our desired format.And, we need to do a bit more work:Define a class that holds multiple instances of PersonExplicitly parse the output of the LLM to the Pydantic classWe can see this for other cases, too.from langchain.prompts import ( PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate,)from langchain.llms import OpenAIfrom pydantic import BaseModel, Field, validatorfrom langchain.output_parsers import PydanticOutputParser# Define your desired data structure.class Joke(BaseModel): setup: str =
Open In Collab
Open In Collab ->: ( PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate,)from langchain.llms import OpenAIfrom pydantic import BaseModel, Field, validatorfrom langchain.output_parsers import PydanticOutputParserclass Person(BaseModel): person_name: str person_height: int person_hair_color: str dog_breed: Optional[str] dog_name: Optional[str]class People(BaseModel): """Identifying information about all people in a text.""" people: Sequence[Person] # Run query = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=People)# Promptprompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)# Run_input = prompt.format_prompt(query=query)model = OpenAI(temperature=0)output = model(_input.to_string())parser.parse(output) People(people=[Person(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed=None, dog_name=None), Person(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)])We can see from the LangSmith trace that we get the same output as above.We can see that we provide a two-shot prompt in order to instruct the LLM to output in our desired format.And, we need to do a bit more work:Define a class that holds multiple instances of PersonExplicitly parse the output of the LLM to the Pydantic classWe can see this for other cases, too.from langchain.prompts import ( PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate,)from langchain.llms import OpenAIfrom pydantic import BaseModel, Field, validatorfrom langchain.output_parsers import PydanticOutputParser# Define your desired data structure.class Joke(BaseModel): setup: str =
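The three manual steps shown above (format the prompt, call the model, parse the output) can also be composed into a single pipeline. In recent LangChain releases the prompt, model, and output parser are all runnables, so the expression-language sketch below should work; the minimum version where this landed is not stated on this page and is an assumption here.

# Sketch: compose prompt, model, and parser into one runnable pipeline
# (assumes a LangChain version where output parsers support the expression
# language).
extract_people = prompt | model | parser
extract_people.invoke({"query": query})
# Expected to return the same People(...) object produced by parser.parse above.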
1,856
structure.class Joke(BaseModel): setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator("setup") def question_ends_with_question_mark(cls, field): if field[-1] != "?": raise ValueError("Badly formed question!") return field# And a query intended to prompt a language model to populate the data structure.joke_query = "Tell me a joke."# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=Joke)# Promptprompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)# Run_input = prompt.format_prompt(query=joke_query)model = OpenAI(temperature=0)output = model(_input.to_string())parser.parse(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')As we can see, we get an output of the Joke class, which respects our originally desired schema: 'setup' and 'punchline'.We can look at the LangSmith trace to see exactly what is going on under the hood.Going deeper​The output parser documentation includes various parser examples for specific types (e.g., lists, datetime, enum, etc). JSONFormer offers another way for structured decoding of a subset of the JSON Schema.Kor is another library for extraction where schema and examples can be provided to the LLM.PreviousChatbotsNextSummarizationUse caseOverviewQuickstartOption 1: OpenAI functionsLooking under the hoodMultiple entity typesUnrelated entitiesExtra informationPydanticOption 2: ParsingPydanticGoing deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Open In Collab
Open In Collab ->: structure.class Joke(BaseModel): setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator("setup") def question_ends_with_question_mark(cls, field): if field[-1] != "?": raise ValueError("Badly formed question!") return field# And a query intended to prompt a language model to populate the data structure.joke_query = "Tell me a joke."# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=Joke)# Promptprompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)# Run_input = prompt.format_prompt(query=joke_query)model = OpenAI(temperature=0)output = model(_input.to_string())parser.parse(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')As we can see, we get an output of the Joke class, which respects our originally desired schema: 'setup' and 'punchline'.We can look at the LangSmith trace to see exactly what is going on under the hood.Going deeper​The output parser documentation includes various parser examples for specific types (e.g., lists, datetime, enum, etc). JSONFormer offers another way for structured decoding of a subset of the JSON Schema.Kor is another library for extraction where schema and examples can be provided to the LLM.PreviousChatbotsNextSummarizationUse caseOverviewQuickstartOption 1: OpenAI functionsLooking under the hoodMultiple entity typesUnrelated entitiesExtra informationPydanticOption 2: ParsingPydanticGoing deeperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
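One practical caveat for the parsing approach: if the model strays from the requested format, parser.parse raises an exception rather than returning a partial result. The sketch below shows one way to handle that case; using OutputFixingParser as the retry mechanism is an illustrative choice, building on the Joke parser and model defined above.

# Sketch: catch malformed generations and optionally retry with a fixing parser.
from langchain.schema import OutputParserException
from langchain.output_parsers import OutputFixingParser

try:
    joke = parser.parse(output)
except OutputParserException:
    # Ask an LLM to repair the malformed output against the same schema.
    fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=model)
    joke = fixing_parser.parse(output)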
1,857
Summarization | 🦜️🔗 Langchain
Open In Colab
Open In Colab ->: Summarization | 🦜️🔗 Langchain
1,858
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingSummarizationOn this pageSummarizationUse case​Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. LLMs are a great tool for this given their proficiency in understanding and synthesizing text.In this walkthrough we'll go over how to perform document summarization using LLMs.Overview​A central question for building a summarizer is how to pass your documents into the LLM's context window. Two common approaches for this are:Stuff: Simply "stuff" all your documents into a single prompt. This is the simplest approach (see here for more on the StuffDocumentsChains, which is used for this method).Map-reduce: Summarize each document on it's own in a "map" step and then "reduce" the summaries into a final summary (see here for more on the MapReduceDocumentsChain, which is used for this method).Quickstart​To give you a sneak preview, either pipeline can be wrapped in a single object: load_summarize_chain. Suppose we want to summarize a blog post. We can create this in a few lines of code.First set environment variables and install packages:pip install openai tiktoken chromadb langchain# Set env var OPENAI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv() Requirement already satisfied: openai in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.27.8) Requirement already satisfied: tiktoken in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.4.0) Requirement already satisfied: chromadb in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.4.4) Requirement already satisfied: langchain in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages
Open In Colab
Open In Colab ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)Interacting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingSummarizationOn this pageSummarizationUse case​Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. LLMs are a great tool for this given their proficiency in understanding and synthesizing text.In this walkthrough we'll go over how to perform document summarization using LLMs.Overview​A central question for building a summarizer is how to pass your documents into the LLM's context window. Two common approaches for this are:Stuff: Simply "stuff" all your documents into a single prompt. This is the simplest approach (see here for more on the StuffDocumentsChains, which is used for this method).Map-reduce: Summarize each document on it's own in a "map" step and then "reduce" the summaries into a final summary (see here for more on the MapReduceDocumentsChain, which is used for this method).Quickstart​To give you a sneak preview, either pipeline can be wrapped in a single object: load_summarize_chain. Suppose we want to summarize a blog post. We can create this in a few lines of code.First set environment variables and install packages:pip install openai tiktoken chromadb langchain# Set env var OPENAI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv() Requirement already satisfied: openai in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.27.8) Requirement already satisfied: tiktoken in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.4.0) Requirement already satisfied: chromadb in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.4.4) Requirement already satisfied: langchain in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages
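Following the overview above, a summarization pipeline can be as short as loading the page into documents and handing them to load_summarize_chain. The sketch below is a hedged illustration rather than this page's exact quickstart code; the URL is a placeholder, and chain_type may be "stuff" or "map_reduce" as described in the overview.

# Sketch: summarize a web page with load_summarize_chain (illustrative URL).
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com/some-blog-post")  # placeholder URL
docs = loader.load()

llm = ChatOpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="stuff")  # or "map_reduce"
print(chain.run(docs))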
1,859
(0.0.299) Requirement already satisfied: requests>=2.20 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from openai) (2.31.0) Requirement already satisfied: tqdm in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from openai) (4.64.1) Requirement already satisfied: aiohttp in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from openai) (3.8.5) Requirement already satisfied: regex>=2022.1.18 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2023.6.3) Requirement already satisfied: pydantic<2.0,>=1.9 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.10.12) Requirement already satisfied: chroma-hnswlib==0.7.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.7.2) Requirement already satisfied: fastapi<0.100.0,>=0.95.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.99.1) Requirement already satisfied: uvicorn[standard]>=0.18.3 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.23.2) Requirement already satisfied: numpy>=1.21.6 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.24.4) Requirement already satisfied: posthog>=2.4.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (3.0.1) Requirement already satisfied: typing-extensions>=4.5.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (4.7.1) Requirement already satisfied: pulsar-client>=3.1.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (3.2.0) Requirement already satisfied: onnxruntime>=1.14.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.15.1) Requirement already satisfied: tokenizers>=0.13.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.13.3) Requirement already satisfied: pypika>=0.48.9 in
Open In Colab
Open In Colab ->: (0.0.299) Requirement already satisfied: requests>=2.20 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from openai) (2.31.0) Requirement already satisfied: tqdm in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from openai) (4.64.1) Requirement already satisfied: aiohttp in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from openai) (3.8.5) Requirement already satisfied: regex>=2022.1.18 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2023.6.3) Requirement already satisfied: pydantic<2.0,>=1.9 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.10.12) Requirement already satisfied: chroma-hnswlib==0.7.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.7.2) Requirement already satisfied: fastapi<0.100.0,>=0.95.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.99.1) Requirement already satisfied: uvicorn[standard]>=0.18.3 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.23.2) Requirement already satisfied: numpy>=1.21.6 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.24.4) Requirement already satisfied: posthog>=2.4.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (3.0.1) Requirement already satisfied: typing-extensions>=4.5.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (4.7.1) Requirement already satisfied: pulsar-client>=3.1.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (3.2.0) Requirement already satisfied: onnxruntime>=1.14.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.15.1) Requirement already satisfied: tokenizers>=0.13.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.13.3) Requirement already satisfied: pypika>=0.48.9 in
1,860
Requirement already satisfied: pypika>=0.48.9 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.48.9) Collecting tqdm (from openai) Obtaining dependency information for tqdm from https://files.pythonhosted.org/packages/00/e5/f12a80907d0884e6dff9c16d0c0114d81b8cd07dc3ae54c5e962cc83037e/tqdm-4.66.1-py3-none-any.whl.metadata Downloading tqdm-4.66.1-py3-none-any.whl.metadata (57 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.6/57.6 kB 2.7 MB/s eta 0:00:00 Requirement already satisfied: overrides>=7.3.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (7.4.0) Requirement already satisfied: importlib-resources in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (6.0.0) Requirement already satisfied: PyYAML>=5.3 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (6.0.1) Requirement already satisfied: SQLAlchemy<3,>=1.4 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (2.0.20) Requirement already satisfied: anyio<4.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (3.7.1) Requirement already satisfied: async-timeout<5.0.0,>=4.0.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (4.0.3) Requirement already satisfied: dataclasses-json<0.7,>=0.5.7 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (0.5.9) Requirement already satisfied: jsonpatch<2.0,>=1.33 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (1.33) Requirement already satisfied: langsmith<0.1.0,>=0.0.38 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (0.0.42) Requirement already satisfied: numexpr<3.0.0,>=2.8.4 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain)
Open In Colab
Open In Colab ->: Requirement already satisfied: pypika>=0.48.9 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.48.9) Collecting tqdm (from openai) Obtaining dependency information for tqdm from https://files.pythonhosted.org/packages/00/e5/f12a80907d0884e6dff9c16d0c0114d81b8cd07dc3ae54c5e962cc83037e/tqdm-4.66.1-py3-none-any.whl.metadata Downloading tqdm-4.66.1-py3-none-any.whl.metadata (57 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.6/57.6 kB 2.7 MB/s eta 0:00:00 Requirement already satisfied: overrides>=7.3.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (7.4.0) Requirement already satisfied: importlib-resources in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (6.0.0) Requirement already satisfied: PyYAML>=5.3 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (6.0.1) Requirement already satisfied: SQLAlchemy<3,>=1.4 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (2.0.20) Requirement already satisfied: anyio<4.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (3.7.1) Requirement already satisfied: async-timeout<5.0.0,>=4.0.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (4.0.3) Requirement already satisfied: dataclasses-json<0.7,>=0.5.7 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (0.5.9) Requirement already satisfied: jsonpatch<2.0,>=1.33 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (1.33) Requirement already satisfied: langsmith<0.1.0,>=0.0.38 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (0.0.42) Requirement already satisfied: numexpr<3.0.0,>=2.8.4 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain)
1,861
(from langchain) (2.8.5) Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (8.2.3) Requirement already satisfied: attrs>=17.3.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (23.1.0) Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (3.2.0) Requirement already satisfied: multidict<7.0,>=4.5 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (6.0.4) Requirement already satisfied: yarl<2.0,>=1.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.9.2) Requirement already satisfied: frozenlist>=1.1.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.4.0) Requirement already satisfied: aiosignal>=1.1.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.1) Requirement already satisfied: idna>=2.8 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from anyio<4.0->langchain) (3.4) Requirement already satisfied: sniffio>=1.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from anyio<4.0->langchain) (1.3.0) Requirement already satisfied: exceptiongroup in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from anyio<4.0->langchain) (1.1.3) Requirement already satisfied: marshmallow<4.0.0,>=3.3.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain) (3.20.1) Requirement already satisfied: marshmallow-enum<2.0.0,>=1.5.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain) (1.5.1) Requirement already satisfied: typing-inspect>=0.4.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from
Open In Colab
1,862
(from dataclasses-json<0.7,>=0.5.7->langchain) (0.9.0) Requirement already satisfied: starlette<0.28.0,>=0.27.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from fastapi<0.100.0,>=0.95.2->chromadb) (0.27.0) Requirement already satisfied: jsonpointer>=1.9 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from jsonpatch<2.0,>=1.33->langchain) (2.4) Requirement already satisfied: coloredlogs in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (15.0.1) Requirement already satisfied: flatbuffers in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (23.5.26) Requirement already satisfied: packaging in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (23.1) Requirement already satisfied: protobuf in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (4.23.4) Requirement already satisfied: sympy in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (1.12) Requirement already satisfied: six>=1.5 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (1.16.0) Requirement already satisfied: monotonic>=1.5 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (1.6) Requirement already satisfied: backoff>=1.10.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (2.2.1) Requirement already satisfied: python-dateutil>2.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (2.8.2) Requirement already satisfied: certifi in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from pulsar-client>=3.1.0->chromadb) (2023.7.22) Requirement already satisfied: urllib3<3,>=1.21.1 in
Open In Colab
1,863
already satisfied: urllib3<3,>=1.21.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (1.26.16) Requirement already satisfied: click>=7.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (8.1.7) Requirement already satisfied: h11>=0.8 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.14.0) Requirement already satisfied: httptools>=0.5.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.6.0) Requirement already satisfied: python-dotenv>=0.13 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (1.0.0) Requirement already satisfied: uvloop!=0.15.0,!=0.15.1,>=0.14.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.17.0) Requirement already satisfied: watchfiles>=0.13 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.19.0) Requirement already satisfied: websockets>=10.4 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (11.0.3) Requirement already satisfied: zipp>=3.1.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from importlib-resources->chromadb) (3.16.2) Requirement already satisfied: mypy-extensions>=0.3.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from typing-inspect>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain) (1.0.0) Requirement already satisfied: humanfriendly>=9.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from coloredlogs->onnxruntime>=1.14.1->chromadb) (10.0) Requirement already satisfied: mpmath>=0.19 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from sympy->onnxruntime>=1.14.1->chromadb) (1.3.0) Using cached
Open In Colab
1,864
(1.3.0) Using cached tqdm-4.66.1-py3-none-any.whl (78 kB) Installing collected packages: tqdm Attempting uninstall: tqdm Found existing installation: tqdm 4.64.1 Uninstalling tqdm-4.64.1: Successfully uninstalled tqdm-4.64.1 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. clarifai 9.8.1 requires tqdm==4.64.1, but you have tqdm 4.66.1 which is incompatible. Successfully installed tqdm-4.66.1 We can use chain_type="stuff", especially if using larger context window models such as: 16k token OpenAI gpt-3.5-turbo-16k, 100k token Anthropic Claude-2. We can also supply chain_type="map_reduce" or chain_type="refine" (read more here).from langchain.chat_models import ChatOpenAIfrom langchain.document_loaders import WebBaseLoaderfrom langchain.chains.summarize import load_summarize_chainloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")docs = loader.load()llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")chain = load_summarize_chain(llm, chain_type="stuff")chain.run(docs) 'The article discusses the concept of building autonomous agents powered by large language models (LLMs). It explores the components of such agents, including planning, memory, and tool use. The article provides case studies and proof-of-concept examples of LLM-powered agents in various domains. It also highlights the challenges and limitations of using LLMs in agent systems.'Option 1. Stuff: When we use load_summarize_chain with chain_type="stuff", we will use the StuffDocumentsChain.The chain takes a list of documents, inserts them all into a single prompt, and passes that prompt to an LLM:from langchain.chains.llm import LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.chains.combine_documents.stuff import StuffDocumentsChain# Define promptprompt_template = """Write a concise summary
Open In Colab
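The snippet above recommends chain_type="stuff" when the whole document fits a large context window. A minimal sketch of how you might check this first, assuming tiktoken is installed; the 12,000-token threshold is an illustrative margin, not an official limit from the notebook:

```python
# Rough sketch only: estimate whether the loaded docs fit a gpt-3.5-turbo-16k
# context before choosing chain_type="stuff".
import tiktoken
from langchain.document_loaders import WebBaseLoader

docs = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/").load()

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
total_tokens = sum(len(enc.encode(d.page_content)) for d in docs)

# Leave headroom for the prompt and the generated summary (assumed margin).
chain_type = "stuff" if total_tokens < 12_000 else "map_reduce"
print(total_tokens, chain_type)
```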
1,865
= """Write a concise summary of the following:"{text}"CONCISE SUMMARY:"""prompt = PromptTemplate.from_template(prompt_template)# Define LLM chainllm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")llm_chain = LLMChain(llm=llm, prompt=prompt)# Define StuffDocumentsChainstuff_chain = StuffDocumentsChain( llm_chain=llm_chain, document_variable_name="text")docs = loader.load()print(stuff_chain.run(docs)) The article discusses the concept of building autonomous agents powered by large language models (LLMs). It explores the components of such agents, including planning, memory, and tool use. The article provides case studies and proof-of-concept examples of LLM-powered agents in various domains, such as scientific discovery and generative agents simulation. It also highlights the challenges and limitations of using LLMs in agent systems.Great! We can see that we reproduce the earlier result using the load_summarize_chain.Go deeper‚ÄãYou can easily customize the prompt. You can easily try different LLMs, (e.g., Claude) via the llm parameter.Option 2. Map-Reduce‚ÄãLet's unpack the map reduce approach. For this, we'll first map each document to an individual summary using an LLMChain. Then we'll use a ReduceDocumentsChain to combine those summaries into a single global summary.First, we specfy the LLMChain to use for mapping each document to an individual summary:from langchain.chains.mapreduce import MapReduceChainfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.chains import ReduceDocumentsChain, MapReduceDocumentsChainllm = ChatOpenAI(temperature=0)# Mapmap_template = """The following is a set of documents{docs}Based on this list of docs, please identify the main themes Helpful Answer:"""map_prompt = PromptTemplate.from_template(map_template)map_chain = LLMChain(llm=llm, prompt=map_prompt)We can also use the Prompt Hub to store and fetch prompts.This will work with your LangSmith API key.For example, see the map prompt here.from
Open In Colab
Open In Colab ->: = """Write a concise summary of the following:"{text}"CONCISE SUMMARY:"""prompt = PromptTemplate.from_template(prompt_template)# Define LLM chainllm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")llm_chain = LLMChain(llm=llm, prompt=prompt)# Define StuffDocumentsChainstuff_chain = StuffDocumentsChain( llm_chain=llm_chain, document_variable_name="text")docs = loader.load()print(stuff_chain.run(docs)) The article discusses the concept of building autonomous agents powered by large language models (LLMs). It explores the components of such agents, including planning, memory, and tool use. The article provides case studies and proof-of-concept examples of LLM-powered agents in various domains, such as scientific discovery and generative agents simulation. It also highlights the challenges and limitations of using LLMs in agent systems.Great! We can see that we reproduce the earlier result using the load_summarize_chain.Go deeper‚ÄãYou can easily customize the prompt. You can easily try different LLMs, (e.g., Claude) via the llm parameter.Option 2. Map-Reduce‚ÄãLet's unpack the map reduce approach. For this, we'll first map each document to an individual summary using an LLMChain. Then we'll use a ReduceDocumentsChain to combine those summaries into a single global summary.First, we specfy the LLMChain to use for mapping each document to an individual summary:from langchain.chains.mapreduce import MapReduceChainfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.chains import ReduceDocumentsChain, MapReduceDocumentsChainllm = ChatOpenAI(temperature=0)# Mapmap_template = """The following is a set of documents{docs}Based on this list of docs, please identify the main themes Helpful Answer:"""map_prompt = PromptTemplate.from_template(map_template)map_chain = LLMChain(llm=llm, prompt=map_prompt)We can also use the Prompt Hub to store and fetch prompts.This will work with your LangSmith API key.For example, see the map prompt here.from
1,866
API key. For example, see the map prompt here.from langchain import hubmap_prompt = hub.pull("rlm/map-prompt")map_chain = LLMChain(llm=llm, prompt=map_prompt)The ReduceDocumentsChain handles taking the document mapping results and reducing them into a single output. It wraps a generic CombineDocumentsChain (like StuffDocumentsChain) but adds the ability to collapse documents before passing them to the CombineDocumentsChain if their cumulative size exceeds token_max. In this example, we can actually re-use our combine chain to also collapse our docs.So if the cumulative number of tokens in our mapped documents exceeds 4000 tokens, then we'll recursively pass in the documents in batches of < 4000 tokens to our StuffDocumentsChain to create batched summaries. And once those batched summaries are cumulatively less than 4000 tokens, we'll pass them all one last time to the StuffDocumentsChain to create the final summary.# Reducereduce_template = """The following is a set of summaries:{doc_summaries}Take these and distill them into a final, consolidated summary of the main themes. Helpful Answer:"""reduce_prompt = PromptTemplate.from_template(reduce_template)# Note we can also get this from the prompt hub, as noted abovereduce_prompt = hub.pull("rlm/map-prompt")reduce_prompt ChatPromptTemplate(input_variables=['docs'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['docs'], template='The following is a set of documents:\n{docs}\nBased on this list of docs, please identify the main themes \nHelpful Answer:'))])# Run chainreduce_chain = LLMChain(llm=llm, prompt=reduce_prompt)# Takes a list of documents, combines them into a single string, and passes this to an LLMChaincombine_documents_chain = StuffDocumentsChain( llm_chain=reduce_chain, document_variable_name="docs")# Combines and iteratively reduces the mapped documentsreduce_documents_chain = ReduceDocumentsChain( # This is final chain that is called.
Open In Colab
1,867
# This is final chain that is called. combine_documents_chain=combine_documents_chain, # If documents exceed context for `StuffDocumentsChain` collapse_documents_chain=combine_documents_chain, # The maximum number of tokens to group documents into. token_max=4000,)Combining our map and reduce chains into one:# Combining documents by mapping a chain over them, then combining resultsmap_reduce_chain = MapReduceDocumentsChain( # Map chain llm_chain=map_chain, # Reduce chain reduce_documents_chain=reduce_documents_chain, # The variable name in the llm_chain to put the documents in document_variable_name="docs", # Return the results of the map steps in the output return_intermediate_steps=False,)text_splitter = CharacterTextSplitter.from_tiktoken_encoder( chunk_size=1000, chunk_overlap=0)split_docs = text_splitter.split_documents(docs) Created a chunk of size 1003, which is longer than the specified 1000print(map_reduce_chain.run(split_docs)) Based on the list of documents provided, the main themes can be identified as follows: 1. LLM-powered autonomous agents: The documents discuss the concept of building agents with LLM as their core controller and highlight the potential of LLM beyond generating written content. They explore the capabilities of LLM as a general problem solver. 2. Agent system overview: The documents provide an overview of the components that make up a LLM-powered autonomous agent system, including planning, memory, and tool use. Each component is explained in detail, highlighting its role in enhancing the agent's capabilities. 3. Planning: The documents discuss how the agent breaks down large tasks into smaller subgoals and utilizes self-reflection to improve the quality of its actions and results. 4. Memory: The documents explain the importance of both short-term and long-term memory in an agent system. Short-term memory is utilized for in-context learning, while
Open In Colab
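The "Created a chunk of size 1003, which is longer than the specified 1000" warning above happens because CharacterTextSplitter only splits on a single separator, so some chunks can overshoot the target. One possible alternative, offered here as a hedged sketch rather than what the notebook actually used:

```python
# RecursiveCharacterTextSplitter falls back through several separators,
# so chunks stay at or under the token budget far more reliably.
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=1000, chunk_overlap=0
)
split_docs = text_splitter.split_documents(docs)
```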
1,868
memory is utilized for in-context learning, while long-term memory allows the agent to retain and recall information over extended periods. 5. Tool use: The documents highlight the agent's ability to call external APIs for additional information and resources that may be missing from its pre-trained model weights. This includes accessing current information, executing code, and retrieving proprietary information. 6. Case studies and proof-of-concept examples: The documents provide examples of how LLM-powered autonomous agents can be applied in various domains, such as scientific discovery and generative agent simulations. These case studies serve as examples of the capabilities and potential applications of such agents. 7. Challenges: The documents acknowledge the challenges associated with building and utilizing LLM-powered autonomous agents, although specific challenges are not mentioned in the given set of documents. 8. Citation and references: The documents include a citation and reference section, indicating that the information presented is based on existing research and sources. Overall, the main themes in the provided documents revolve around LLM-powered autonomous agents, their components and capabilities, planning, memory, tool use, case studies, and challenges.Go deeper: Customization: As shown above, you can customize the LLMs and prompts for map and reduce stages. Real-world use-case: See this blog post case-study on analyzing user interactions (questions about LangChain documentation)! The blog post and associated repo also introduce clustering as a means of summarization. This opens up a third path beyond the stuff or map-reduce approaches that is worth considering.Option 3. Refine: Refine is similar to map-reduce:The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the
Open In Colab
1,869
inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.This can be easily run with the chain_type="refine" specified.chain = load_summarize_chain(llm, chain_type="refine")chain.run(split_docs) 'The article explores the concept of building autonomous agents powered by large language models (LLMs) and their potential as problem solvers. It discusses different approaches to task decomposition, the integration of self-reflection into LLM-based agents, and the use of external classical planners for long-horizon planning. The new context introduces the Chain of Hindsight (CoH) approach and Algorithm Distillation (AD) for training models to produce better outputs. It also discusses different types of memory and the use of external memory for fast retrieval. The article explores the concept of tool use and introduces the MRKL system and experiments on fine-tuning LLMs to use external tools. It introduces HuggingGPT, a framework that uses ChatGPT as a task planner, and discusses the challenges of using LLM-powered agents in real-world scenarios. The article concludes with case studies on scientific discovery agents and the use of LLM-powered agents in anticancer drug discovery. It also introduces the concept of generative agents that combine LLM with memory, planning, and reflection mechanisms. The conversation samples provided discuss the implementation of a game architecture and the challenges in building LLM-centered agents. The article provides references to related research papers and resources for further exploration.'It's also possible to supply a prompt and return intermediate steps.prompt_template = """Write a concise summary of the following:{text}CONCISE SUMMARY:"""prompt = PromptTemplate.from_template(prompt_template)refine_template = ( "Your job is to produce a final summary\n" "We have provided an existing summary up to a certain point: {existing_answer}\n" "We have the opportunity to refine the
Open In Colab
1,870
"We have the opportunity to refine the existing summary" "(only if needed) with some more context below.\n" "------------\n" "{text}\n" "------------\n" "Given the new context, refine the original summary in Italian" "If the context isn't useful, return the original summary.")refine_prompt = PromptTemplate.from_template(refine_template)chain = load_summarize_chain( llm=llm, chain_type="refine", question_prompt=prompt, refine_prompt=refine_prompt, return_intermediate_steps=True, input_key="input_documents", output_key="output_text",)result = chain({"input_documents": split_docs}, return_only_outputs=True)print(result["output_text"]) Il presente articolo discute il concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. Esplora i diversi componenti di un sistema di agenti alimentato da LLM, tra cui la pianificazione, la memoria e l'uso degli strumenti. Dimostrazioni di concetto come AutoGPT mostrano il potenziale di LLM come risolutore generale di problemi. Approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorarsi iterativamente. Tuttavia, ci sono sfide da affrontare, come la limitata capacità di contesto che limita l'inclusione di informazioni storiche dettagliate e la difficoltà di pianificazione a lungo termine e decomposizione delle attività. Inoltre, l'affidabilità dell'interfaccia di linguaggio naturale tra LLM e componenti esterni come la memoria e gli strumenti è incerta, poiché i LLM possono commettere errori di formattazione e mostrare comportamenti ribelli. Nonostante ciò, il sistema AutoGPT viene menzionato come esempio di dimostrazione di concetto che utilizza LLM come controller principale per agenti autonomi. Questo articolo fa riferimento a diverse fonti che esplorano approcci e applicazioni specifiche di LLM nell'ambito degli agenti
Open In Colab
Open In Colab ->: "We have the opportunity to refine the existing summary" "(only if needed) with some more context below.\n" "------------\n" "{text}\n" "------------\n" "Given the new context, refine the original summary in Italian" "If the context isn't useful, return the original summary.")refine_prompt = PromptTemplate.from_template(refine_template)chain = load_summarize_chain( llm=llm, chain_type="refine", question_prompt=prompt, refine_prompt=refine_prompt, return_intermediate_steps=True, input_key="input_documents", output_key="output_text",)result = chain({"input_documents": split_docs}, return_only_outputs=True)print(result["output_text"]) Il presente articolo discute il concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. Esplora i diversi componenti di un sistema di agenti alimentato da LLM, tra cui la pianificazione, la memoria e l'uso degli strumenti. Dimostrazioni di concetto come AutoGPT mostrano il potenziale di LLM come risolutore generale di problemi. Approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorarsi iterativamente. Tuttavia, ci sono sfide da affrontare, come la limitata capacità di contesto che limita l'inclusione di informazioni storiche dettagliate e la difficoltà di pianificazione a lungo termine e decomposizione delle attività. Inoltre, l'affidabilità dell'interfaccia di linguaggio naturale tra LLM e componenti esterni come la memoria e gli strumenti è incerta, poiché i LLM possono commettere errori di formattazione e mostrare comportamenti ribelli. Nonostante ciò, il sistema AutoGPT viene menzionato come esempio di dimostrazione di concetto che utilizza LLM come controller principale per agenti autonomi. Questo articolo fa riferimento a diverse fonti che esplorano approcci e applicazioni specifiche di LLM nell'ambito degli agenti
1,871
specifiche di LLM nell'ambito degli agenti autonomi.print("\n\n".join(result["intermediate_steps"][:3])) This article discusses the concept of building autonomous agents using LLM (large language model) as the core controller. The article explores the different components of an LLM-powered agent system, including planning, memory, and tool use. It also provides examples of proof-of-concept demos and highlights the potential of LLM as a general problem solver. Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono forniti anche esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono forniti anche esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Il nuovo contesto riguarda l'approccio Chain of Hindsight (CoH) che permette al modello di migliorare autonomamente i propri output attraverso un processo di apprendimento supervisionato. Viene anche presentato l'approccio Algorithm Distillation (AD) che applica
Open In Colab
1,872
Algorithm Distillation (AD) che applica lo stesso concetto alle traiettorie di apprendimento per compiti di reinforcement learning.Splitting and summarizing in a single chain: For convenience, we can wrap both the text splitting of our long document and summarizing in a single AnalyzeDocumentChain.from langchain.chains import AnalyzeDocumentChainsummarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=chain, text_splitter=text_splitter)summarize_document_chain.run(docs[0]) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[17], line 4 1 from langchain.chains import AnalyzeDocumentChain 3 summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=chain, text_splitter=text_splitter) ----> 4 summarize_document_chain.run(docs[0]) File ~/langchain/libs/langchain/langchain/chains/base.py:496, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs) 459 """Convenience method for executing chain. 460 461 The main difference between this method and `Chain.__call__` is that this (...) 493 # -> "The temperature in Boise is..." 494 """ 495 # Run at start to make sure this is possible/defined --> 496 _output_key = self._run_output_key 498 if args and not kwargs: 499 if len(args) != 1: File ~/langchain/libs/langchain/langchain/chains/base.py:445, in Chain._run_output_key(self) 442 @property 443 def _run_output_key(self) -> str: 444 if len(self.output_keys) != 1: --> 445 raise ValueError( 446 f"`run` not supported when there is not exactly " 447 f"one output key. Got {self.output_keys}." 448 ) 449 return self.output_keys[0] ValueError: `run` not supported when there is not exactly one output key. Got ['output_text',
Open In Colab
1,873
not exactly one output key. Got ['output_text', 'intermediate_steps'].
Open In Colab
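The traceback above fails because `run` requires exactly one output key, and the customized refine chain returns both output_text and intermediate_steps. A hedged workaround, sketched only from that error message and reusing llm, text_splitter, and docs from the earlier cells: wrap a refine chain that does not return intermediate steps, and pass the raw document text.

```python
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain

# A refine chain with a single output key, so `run` is allowed.
simple_refine_chain = load_summarize_chain(llm, chain_type="refine")
summarize_document_chain = AnalyzeDocumentChain(
    combine_docs_chain=simple_refine_chain, text_splitter=text_splitter
)
# AnalyzeDocumentChain splits raw text, so pass the page content string.
print(summarize_document_chain.run(docs[0].page_content))
```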
1,874
Citing retrieval sources | 🦜️🔗 Langchain
This notebook shows how to use OpenAI functions ability to extract citations from text.
1,875
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKQA over structured dataSQLRetrieval-augmented generation (RAG)RAG over codeUsing a RetrieverRemembering chat historyAnalyze a single long documentAgent with retrieval toolText splitting by headerRAG over in-memory documentsRAG using local modelsDynamically select from multiple retrieversRetrieving from multiple sourcesCiting retrieval sourcesRetrieve from vector stores directlyInteracting with APIsChatbotsExtractionSummarizationTaggingWeb scrapingSynthetic data generationGraph queryingRetrieval-augmented generation (RAG)Citing retrieval sourcesCiting retrieval sourcesThis notebook shows how to use OpenAI functions ability to extract citations from text.from langchain.chains import create_citation_fuzzy_match_chainfrom langchain.chat_models import ChatOpenAI /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.4) is available. It's recommended that you update to the latest version using `pip install -U deeplake`. warnings.warn(question = "What did the author do during college?"context = """My name is Jason Liu, and I grew up in Toronto Canada but I was born in China.I went to an arts highschool but in university I studied Computational Mathematics and physics. As part of coop I worked at many companies including Stitchfix, Facebook.I also started the Data Science club at the University of Waterloo and I was the president of the club for 2 years."""llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")chain = create_citation_fuzzy_match_chain(llm)result = chain.run(question=question, context=context)print(result) question='What did the author do during college?' answer=[FactWithEvidence(fact='The author studied Computational Mathematics and physics in university.', substring_quote=['in university I studied
This notebook shows how to use OpenAI functions ability to extract citations from text.
1,876
substring_quote=['in university I studied Computational Mathematics and physics']), FactWithEvidence(fact='The author started the Data Science club at the University of Waterloo and was the president of the club for 2 years.', substring_quote=['started the Data Science club at the University of Waterloo', 'president of the club for 2 years'])]def highlight(text, span): return ( "..." + text[span[0] - 20 : span[0]] + "*" + "\033[91m" + text[span[0] : span[1]] + "\033[0m" + "*" + text[span[1] : span[1] + 20] + "..." )for fact in result.answer: print("Statement:", fact.fact) for span in fact.get_spans(context): print("Citation:", highlight(context, span)) print() Statement: The author studied Computational Mathematics and physics in university. Citation: ...arts highschool but *in university I studied Computational Mathematics and physics*. As part of coop I... Statement: The author started the Data Science club at the University of Waterloo and was the president of the club for 2 years. Citation: ...x, Facebook. I also *started the Data Science club at the University of Waterloo* and I was the presi... Citation: ...erloo and I was the *president of the club for 2 years*. ... PreviousRetrieving from multiple sourcesNextRetrieve from vector stores directlyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to use OpenAI functions ability to extract citations from text.
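The highlight helper above prints ANSI-colored context. As a small illustrative variation (not from the notebook), the same result object can be rendered as a plain numbered answer with its supporting quotes, reusing result and context from the cells above:

```python
# Render each extracted fact with its cited quote(s) from the source context.
for i, fact in enumerate(result.answer, start=1):
    print(f"{i}. {fact.fact}")
    for span in fact.get_spans(context):
        print(f"   [{i}] \"{context[span[0]:span[1]]}\"")
```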
1,877
Different call methods | 🦜️🔗 Langchain
All classes inherited from Chain offer a few ways of running chain logic. The most direct one is by using __call__:
1,878
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toAsync APIDifferent call methodsCustom chainDebugging chainsLoading from LangChainHubAdding memory (state)Using OpenAI functionsSerializationFoundationalDocumentsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsHow toDifferent call methodsDifferent call methodsAll classes inherited from Chain offer a few ways of running chain logic. The most direct one is by using __call__:chat = ChatOpenAI(temperature=0)prompt_template = "Tell me a {adjective} joke"llm_chain = LLMChain(llm=chat, prompt=PromptTemplate.from_template(prompt_template))llm_chain(inputs={"adjective": "corny"}) {'adjective': 'corny', 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}By default, __call__ returns both the input and output key values. You can configure it to only return output key values by setting return_only_outputs to True.llm_chain("corny", return_only_outputs=True) {'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}If the Chain only outputs one output key (i.e. only has one element in its output_keys), you can use run method. Note that run outputs a string instead of a dictionary.# llm_chain only has one output key, so we can use runllm_chain.output_keys ['text']llm_chain.run({"adjective": "corny"}) 'Why did the tomato turn red? Because it saw the salad dressing!'In the case of one input key, you can input the string directly without specifying the input mapping.# These two are equivalentllm_chain.run({"adjective": "corny"})llm_chain.run("corny")# These two are also equivalentllm_chain("corny")llm_chain({"adjective": "corny"}) {'adjective': 'corny', 'text': 'Why did the tomato turn red? Because it saw the
All classes inherited from Chain offer a few ways of running chain logic. The most direct one is by using __call__:
1,879
'Why did the tomato turn red? Because it saw the salad dressing!'}Tips: You can easily integrate a Chain object as a Tool in your Agent via its run method. See an example here.PreviousAsync APINextCustom chainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
All classes inherited from Chain offer a few ways of running chain logic. The most direct one is by using call:
All classes inherited from Chain offer a few ways of running chain logic. The most direct one is by using call: ->: 'Why did the tomato turn red? Because it saw the salad dressing!'}Tips: You can easily integrate a Chain object as a Tool in your Agent via its run method. See an example here.PreviousAsync APINextCustom chainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
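The tip in the row above mentions exposing a chain to an agent as a Tool through its run method. A rough sketch of that wiring is below; the tool name and description strings are illustrative assumptions, not taken from the original page.

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

chat = ChatOpenAI(temperature=0)
joke_chain = LLMChain(llm=chat, prompt=PromptTemplate.from_template("Tell me a {adjective} joke"))

# run() takes a single string and returns a string, so it can be passed
# directly as a Tool's func.
joke_tool = Tool(
    name="JokeTeller",  # illustrative name
    func=joke_chain.run,
    description="Tells a joke in the requested style.",  # illustrative description
)

agent = initialize_agent(
    tools=[joke_tool],
    llm=chat,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
# agent.run("Tell me a corny joke")  # the agent can now call JokeTeller itself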
1,880
Loading from LangChainHub | 🦜️🔗 Langchain
This notebook covers how to load chains from LangChainHub.
This notebook covers how to load chains from LangChainHub. ->: Loading from LangChainHub | 🦜️🔗 Langchain
1,881
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toAsync APIDifferent call methodsCustom chainDebugging chainsLoading from LangChainHubAdding memory (state)Using OpenAI functionsSerializationFoundationalDocumentsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsHow toLoading from LangChainHubLoading from LangChainHubThis notebook covers how to load chains from LangChainHub.from langchain.chains import load_chainchain = load_chain("lc://chains/llm-math/chain.json")chain.run("whats 2 raised to .12") > Entering new LLMMathChain chain... whats 2 raised to .12 Answer: 1.0791812460476249 > Finished chain. 'Answer: 1.0791812460476249'Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.llms import OpenAIfrom langchain.chains import VectorDBQAfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()vectorstore = Chroma.from_documents(texts, embeddings) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.chain = load_chain("lc://chains/vector-db-qa/stuff/chain.json", vectorstore=vectorstore)query = "What did the president say about Ketanji Brown Jackson"chain.run(query) " The president said that Ketanji Brown Jackson is a
This notebook covers how to load chains from LangChainHub.
This notebook covers how to load chains from LangChainHub. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toAsync APIDifferent call methodsCustom chainDebugging chainsLoading from LangChainHubAdding memory (state)Using OpenAI functionsSerializationFoundationalDocumentsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsHow toLoading from LangChainHubLoading from LangChainHubThis notebook covers how to load chains from LangChainHub.from langchain.chains import load_chainchain = load_chain("lc://chains/llm-math/chain.json")chain.run("whats 2 raised to .12") > Entering new LLMMathChain chain... whats 2 raised to .12 Answer: 1.0791812460476249 > Finished chain. 'Answer: 1.0791812460476249'Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.llms import OpenAIfrom langchain.chains import VectorDBQAfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()vectorstore = Chroma.from_documents(texts, embeddings) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.chain = load_chain("lc://chains/vector-db-qa/stuff/chain.json", vectorstore=vectorstore)query = "What did the president say about Ketanji Brown Jackson"chain.run(query) " The president said that Ketanji Brown Jackson is a
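load_chain is not limited to lc:// hub paths; a chain serialized to disk with Chain.save should load back the same way. The sketch below is a hedged illustration of that round trip: the file name is made up, and it uses an OpenAI completion LLM rather than a chat model, since completion-LLM chains were the ones reliably serializable in this era.

from langchain.chains import LLMChain, load_chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm_chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=PromptTemplate.from_template("Tell me a {adjective} joke"),
)

# Serialize the chain to a local JSON file, then load it back from disk.
llm_chain.save("llm_chain.json")  # illustrative file name
reloaded = load_chain("llm_chain.json")
print(reloaded.run("corny"))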
1,882
president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence."PreviousDebugging chainsNextAdding memory (state)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook covers how to load chains from LangChainHub.
This notebook covers how to load chains from LangChainHub. ->: president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence."PreviousDebugging chainsNextAdding memory (state)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,883
Documents | 🦜️🔗 Langchain
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more. ->: Documents | 🦜️🔗 Langchain
1,884
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalDocumentsStuffRefineMap reduceMap re-rankMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsDocumentsDocumentsThese are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.These chains all implement a common interface:class BaseCombineDocumentsChain(Chain, ABC):    """Base interface for chains combining documents."""    @abstractmethod    def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:        """Combine documents into a single string."""📄️ StuffThe stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt and passes that prompt to an LLM.📄️ RefineThe Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.📄️ Map reduceThe map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.📄️ Map re-rankThe map re-rank documents chain runs an
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalDocumentsStuffRefineMap reduceMap re-rankMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsDocumentsDocumentsThese are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.These chains all implement a common interface:class BaseCombineDocumentsChain(Chain, ABC):    """Base interface for chains combining documents."""    @abstractmethod    def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:        """Combine documents into a single string."""📄️ StuffThe stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt and passes that prompt to an LLM.📄️ RefineThe Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.📄️ Map reduceThe map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.📄️ Map re-rankThe map re-rank documents chain runs an
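The row above describes the combine-documents chains abstractly. As one concrete, hedged illustration of the simplest variant, the load_summarize_chain helper can build a stuff chain; the two documents here are made up inline rather than loaded from anywhere.

from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI
from langchain.schema import Document

# Illustrative in-memory documents.
docs = [
    Document(page_content="LangChain ships several chains for combining documents."),
    Document(page_content="The stuff chain inserts every document into a single prompt."),
]

# chain_type="stuff" stuffs all documents into one prompt and calls the LLM once.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="stuff")
print(chain.run(docs))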
1,885
re-rankThe map re-rank documents chain runs an initial prompt on each document, that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned.PreviousTransformationNextStuffCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more. ->: re-rankThe map re-rank documents chain runs an initial prompt on each document, that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned.PreviousTransformationNextStuffCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,886
Refine | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalDocumentsStuffRefineMap reduceMap re-rankMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsDocumentsRefineRefineThe Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context. The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain. There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.PreviousStuffNextMap reduceCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
The Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer. ->: Refine | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalDocumentsStuffRefineMap reduceMap re-rankMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsDocumentsRefineRefineThe Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context. The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain. There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.PreviousStuffNextMap reduceCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
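As a hedged sketch of the refine behaviour described in the row above, the load_summarize_chain helper also accepts chain_type="refine"; setting return_intermediate_steps=True exposes the iteratively updated answers. The documents are illustrative.

from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI
from langchain.schema import Document

docs = [
    Document(page_content="Part one of an illustrative report."),
    Document(page_content="Part two of the same report."),
    Document(page_content="Part three, with the conclusions."),
]

# The refine chain answers from the first document, then revisits its answer
# once per remaining document, refining it each time.
chain = load_summarize_chain(
    OpenAI(temperature=0),
    chain_type="refine",
    return_intermediate_steps=True,
)
result = chain({"input_documents": docs})
print(result["intermediate_steps"])  # one intermediate summary per refinement pass
print(result["output_text"])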
1,887
Map re-rank | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalDocumentsStuffRefineMap reduceMap re-rankMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsDocumentsMap re-rankMap re-rankThe map re-rank documents chain runs an initial prompt on each document, that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned.PreviousMap reduceNextMemoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The map re-rank documents chain runs an initial prompt on each document, that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned.
The map re-rank documents chain runs an initial prompt on each document, that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned. ->: Map re-rank | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalDocumentsStuffRefineMap reduceMap re-rankMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsDocumentsMap re-rankMap re-rankThe map re-rank documents chain runs an initial prompt on each document, that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned.PreviousMap reduceNextMemoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
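A hedged sketch of the map re-rank behaviour described above, using the question-answering helper load_qa_chain with chain_type="map_rerank". The documents and question are illustrative.

from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.schema import Document

docs = [
    Document(page_content="The north warehouse stocks blue backpacks."),
    Document(page_content="The downtown store only sells red backpacks."),
]

# Each document is queried separately; the LLM also scores its own answer,
# and the highest-scoring answer is returned.
chain = load_qa_chain(
    OpenAI(temperature=0),
    chain_type="map_rerank",
    return_intermediate_steps=True,
)
result = chain({"input_documents": docs, "question": "Where can I buy a blue backpack?"})
print(result["output_text"])
print(result["intermediate_steps"])  # per-document answers with their scores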
1,888
Map reduce | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalDocumentsStuffRefineMap reduceMap re-rankMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsDocumentsMap reduceMap reduceThe map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.PreviousRefineNextMap re-rankCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.
The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary. ->: Map reduce | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalDocumentsStuffRefineMap reduceMap re-rankMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsDocumentsMap reduceMap reduceThe map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.PreviousRefineNextMap re-rankCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
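And a hedged sketch of the map reduce variant described above, again via load_summarize_chain; the per-chapter documents are illustrative.

from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI
from langchain.schema import Document

docs = [
    Document(page_content="Chapter 1 introduces the main characters."),
    Document(page_content="Chapter 2 describes the setting in detail."),
    Document(page_content="Chapter 3 sets up the central conflict."),
]

# Map step: each document is summarized independently.
# Reduce step: the per-document summaries are combined into one final summary.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
print(chain.run(docs))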
1,889
JSON | 🦜️🔗 Langchain
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: JSON | 🦜️🔗 Langchain
1,890
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersCSVFile DirectoryHTMLJSONMarkdownPDFDocument transformersText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument loadersJSONOn this pageJSONJSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).JSON Lines is a file format where each line is a valid JSON value.The JSONLoader uses a specified jq schema to parse the JSON files. It uses the jq python package.
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalDocument loadersCSVFile DirectoryHTMLJSONMarkdownPDFDocument transformersText embedding modelsVector storesRetrieversIndexingChainsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesRetrievalDocument loadersJSONOn this pageJSONJSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).JSON Lines is a file format where each line is a valid JSON value.The JSONLoader uses a specified jq schema to parse the JSON files. It uses the jq python package.
1,891
Check this manual for a detailed documentation of the jq syntax.#!pip install jqfrom langchain.document_loaders import JSONLoaderimport jsonfrom pathlib import Pathfrom pprint import pprintfile_path='./example_data/facebook_chat.json'data = json.loads(Path(file_path).read_text())pprint(data) {'image': {'creation_timestamp': 1675549016, 'uri': 'image_of_the_chat.jpg'}, 'is_still_participant': True, 'joinable_mode': {'link': '', 'mode': 1}, 'magic_words': [], 'messages': [{'content': 'Bye!', 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}, {'content': 'Oh no worries! Bye', 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}, {'content': 'No Im sorry it was my mistake, the blue one is not ' 'for sale', 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}, {'content': 'I thought you were selling the blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}, {'content': 'Im not interested in this bag. Im interested in the ' 'blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}, {'content': 'Here is $129', 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}, {'photos': [{'creation_timestamp': 1675595059, 'uri': 'url_of_some_picture.jpg'}], 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}, {'content': 'Online is at least $100', 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}, {'content': 'How much do you want?', 'sender_name': 'User 1', 'timestamp_ms':
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: Check this manual for a detailed documentation of the jq syntax.#!pip install jqfrom langchain.document_loaders import JSONLoaderimport jsonfrom pathlib import Pathfrom pprint import pprintfile_path='./example_data/facebook_chat.json'data = json.loads(Path(file_path).read_text())pprint(data) {'image': {'creation_timestamp': 1675549016, 'uri': 'image_of_the_chat.jpg'}, 'is_still_participant': True, 'joinable_mode': {'link': '', 'mode': 1}, 'magic_words': [], 'messages': [{'content': 'Bye!', 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}, {'content': 'Oh no worries! Bye', 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}, {'content': 'No Im sorry it was my mistake, the blue one is not ' 'for sale', 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}, {'content': 'I thought you were selling the blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}, {'content': 'Im not interested in this bag. Im interested in the ' 'blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}, {'content': 'Here is $129', 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}, {'photos': [{'creation_timestamp': 1675595059, 'uri': 'url_of_some_picture.jpg'}], 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}, {'content': 'Online is at least $100', 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}, {'content': 'How much do you want?', 'sender_name': 'User 1', 'timestamp_ms':
1,892
'User 1', 'timestamp_ms': 1675594799696}, {'content': 'Goodmorning! $50 is too low.', 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}, {'content': 'Hi! Im interested in your bag. Im offering $50. Let ' 'me know if you are interested. Thanks!', 'sender_name': 'User 1', 'timestamp_ms': 1675549022673}], 'participants': [{'name': 'User 1'}, {'name': 'User 2'}], 'thread_path': 'inbox/User 1 and User 2 chat', 'title': 'User 1 and User 2 chat'}Using JSONLoader​Suppose we are interested in extracting the values under the content field within the messages key of the JSON data. This can easily be done through the JSONLoader as shown below.JSON file​loader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[].content', text_content=False)data = loader.load()pprint(data) [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source':
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: 'User 1', 'timestamp_ms': 1675594799696}, {'content': 'Goodmorning! $50 is too low.', 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}, {'content': 'Hi! Im interested in your bag. Im offering $50. Let ' 'me know if you are interested. Thanks!', 'sender_name': 'User 1', 'timestamp_ms': 1675549022673}], 'participants': [{'name': 'User 1'}, {'name': 'User 2'}], 'thread_path': 'inbox/User 1 and User 2 chat', 'title': 'User 1 and User 2 chat'}Using JSONLoader​Suppose we are interested in extracting the values under the content field within the messages key of the JSON data. This can easily be done through the JSONLoader as shown below.JSON file​loader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[].content', text_content=False)data = loader.load()pprint(data) [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source':
1,893
interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8}), Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11})]JSON Lines file​If you want to load documents from a JSON Lines file, you pass json_lines=True
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8}), Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11})]JSON Lines file​If you want to load documents from a JSON Lines file, you pass json_lines=True
1,894
and specify jq_schema to extract page_content from a single JSON object.file_path = './example_data/facebook_chat_messages.jsonl'pprint(Path(file_path).read_text()) ('{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"}\n' '{"sender_name": "User 1", "timestamp_ms": 1675597435669, "content": "Oh no ' 'worries! Bye"}\n' '{"sender_name": "User 2", "timestamp_ms": 1675596277579, "content": "No Im ' 'sorry it was my mistake, the blue one is not for sale"}\n')loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.content', text_content=False, json_lines=True)data = loader.load()pprint(data) [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})]Another option is set jq_schema='.' and provide content_key:loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.', content_key='sender_name', json_lines=True)data = loader.load()pprint(data) [Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='User 1', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='User 2', metadata={'source':
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: and specify jq_schema to extract page_content from a single JSON object.file_path = './example_data/facebook_chat_messages.jsonl'pprint(Path(file_path).read_text()) ('{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"}\n' '{"sender_name": "User 1", "timestamp_ms": 1675597435669, "content": "Oh no ' 'worries! Bye"}\n' '{"sender_name": "User 2", "timestamp_ms": 1675596277579, "content": "No Im ' 'sorry it was my mistake, the blue one is not for sale"}\n')loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.content', text_content=False, json_lines=True)data = loader.load()pprint(data) [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})]Another option is set jq_schema='.' and provide content_key:loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.', content_key='sender_name', json_lines=True)data = loader.load()pprint(data) [Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='User 1', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='User 2', metadata={'source':
1,895
2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})]Extracting metadata​Generally, we want to include metadata available in the JSON file into the documents that we create from the content.The following demonstrates how metadata can be extracted using the JSONLoader.There are some key changes to be noted. In the previous example where we didn't collect the metadata, we managed to directly specify in the schema where the value for the page_content can be extracted from..messages[].contentIn the current example, we have to tell the loader to iterate over the records in the messages field. The jq_schema then has to be:.messages[]This allows us to pass the records (dict) into the metadata_func that has to be implemented. The metadata_func is responsible for identifying which pieces of information in the record should be included in the metadata stored in the final Document object.Additionally, we now have to explicitly specify in the loader, via the content_key argument, the key from the record where the value for the page_content needs to be extracted from.# Define the metadata extraction function.def metadata_func(record: dict, metadata: dict) -> dict:    metadata["sender_name"] = record.get("sender_name")    metadata["timestamp_ms"] = record.get("timestamp_ms")    return metadataloader = JSONLoader(    file_path='./example_data/facebook_chat.json',    jq_schema='.messages[]',    content_key="content",    metadata_func=metadata_func)data = loader.load()pprint(data) [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source':
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})]Extracting metadata​Generally, we want to include metadata available in the JSON file into the documents that we create from the content.The following demonstrates how metadata can be extracted using the JSONLoader.There are some key changes to be noted. In the previous example where we didn't collect the metadata, we managed to directly specify in the schema where the value for the page_content can be extracted from..messages[].contentIn the current example, we have to tell the loader to iterate over the records in the messages field. The jq_schema then has to be:.messages[]This allows us to pass the records (dict) into the metadata_func that has to be implemented. The metadata_func is responsible for identifying which pieces of information in the record should be included in the metadata stored in the final Document object.Additionally, we now have to explicitly specify in the loader, via the content_key argument, the key from the record where the value for the page_content needs to be extracted from.# Define the metadata extraction function.def metadata_func(record: dict, metadata: dict) -> dict: metadata["sender_name"] = record.get("sender_name") metadata["timestamp_ms"] = record.get("timestamp_ms") return metadataloader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[]', content_key="content", metadata_func=metadata_func)data = loader.load()pprint(data) [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source':
1,896
no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source':
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source':
1,897
much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]Now, you will see that the documents contain the metadata associated with the content we extracted.The metadata_func​As shown above, the metadata_func accepts the default metadata generated by the JSONLoader. This allows full control to the user with respect to how the metadata is formatted.For example, the default metadata contains the source and the seq_num keys. However, it is possible that the JSON data contain these keys as well. The user can then exploit the metadata_func to rename the default keys and use the ones from the JSON data.The example below shows how we can modify the source to only contain information of the file source relative to the langchain directory.# Define the metadata extraction function.def metadata_func(record: dict, metadata: dict) -> dict:    metadata["sender_name"] = record.get("sender_name")    metadata["timestamp_ms"] = record.get("timestamp_ms")    if "source" in metadata:        source = metadata["source"].split("/")        source = source[source.index("langchain"):]        metadata["source"] = "/".join(source)    return metadataloader = JSONLoader(    file_path='./example_data/facebook_chat.json',
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]Now, you will see that the documents contain the metadata associated with the content we extracted.The metadata_func​As shown above, the metadata_func accepts the default metadata generated by the JSONLoader. This allows full control to the user with respect to how the metadata is formatted.For example, the default metadata contains the source and the seq_num keys. However, it is possible that the JSON data contain these keys as well. The user can then exploit the metadata_func to rename the default keys and use the ones from the JSON data.The example below shows how we can modify the source to only contain information of the file source relative to the langchain directory.# Define the metadata extraction function.def metadata_func(record: dict, metadata: dict) -> dict: metadata["sender_name"] = record.get("sender_name") metadata["timestamp_ms"] = record.get("timestamp_ms") if "source" in metadata: source = metadata["source"].split("/") source = source[source.index("langchain"):] metadata["source"] = "/".join(source) return metadataloader = JSONLoader( file_path='./example_data/facebook_chat.json',
1,898
file_path='./example_data/facebook_chat.json', jq_schema='.messages[]', content_key="content", metadata_func=metadata_func)data = loader.load()pprint(data) [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source':
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: file_path='./example_data/facebook_chat.json', jq_schema='.messages[]', content_key="content", metadata_func=metadata_func)data = loader.load()pprint(data) [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source':
1,899
is at least $100', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]Common JSON structures with jq schema​The list below provides a reference to the possible jq_schema the user can use to extract content from the JSON data depending on the structure.JSON -> [{"text": ...}, {"text": ...}, {"text": ...}]jq_schema -> ".[].text"JSON -> {"key": [{"text": ...}, {"text": ...}, {"text": ...}]}jq_schema -> ".key[].text"JSON -> ["...", "...", "..."]jq_schema -> ".[]"PreviousHTMLNextMarkdownUsing JSONLoaderJSON fileJSON Lines fileExtracting metadataThe metadata_funcCommon JSON structures with jq schemaCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). ->: is at least $100', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]Common JSON structures with jq schema​The list below provides a reference to the possible jq_schema the user can use to extract content from the JSON data depending on the structure.JSON -> [{"text": ...}, {"text": ...}, {"text": ...}]jq_schema -> ".[].text"JSON -> {"key": [{"text": ...}, {"text": ...}, {"text": ...}]}jq_schema -> ".key[].text"JSON -> ["...", "...", "..."]jq_schema -> ".[]"PreviousHTMLNextMarkdownUsing JSONLoaderJSON fileJSON Lines fileExtracting metadataThe metadata_funcCommon JSON structures with jq schemaCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
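To make the first mapping in that jq_schema reference concrete, the hedged sketch below writes a tiny JSON file in the [{"text": ...}, ...] shape and loads it with jq_schema='.[].text'; the file name and contents are illustrative.

import json
from pathlib import Path

from langchain.document_loaders import JSONLoader

# Illustrative data matching the first structure listed above:
# JSON -> [{"text": ...}, {"text": ...}]  =>  jq_schema -> ".[].text"
sample = [{"text": "first snippet"}, {"text": "second snippet"}]
path = Path("sample_texts.json")  # illustrative file name
path.write_text(json.dumps(sample))

loader = JSONLoader(file_path=str(path), jq_schema=".[].text")
docs = loader.load()
for doc in docs:
    print(doc.metadata["seq_num"], doc.page_content)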