his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

## Perform a vector similarity search with relevance scores

Execute a pure vector similarity search using the `similarity_search_with_relevance_scores()` method:

```python
from pprint import pprint

docs_and_scores = vector_store.similarity_search_with_relevance_scores(
    query="What did the president say about Ketanji Brown Jackson",
    k=4,
    score_threshold=0.80,
)
pprint(docs_and_scores)
```

```
[(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'C:\\repos\\langchain-fruocco-acs\\langchain\\docs\\extras\\modules\\state_of_the_union.txt'}), 0.8441472),
 (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'C:\\repos\\langchain-fruocco-acs\\langchain\\docs\\extras\\modules\\state_of_the_union.txt'}), 0.8441472),
 (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': 'C:\\repos\\langchain-fruocco-acs\\langchain\\docs\\extras\\modules\\state_of_the_union.txt'}), 0.82153815),
 (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': 'C:\\repos\\langchain-fruocco-acs\\langchain\\docs\\extras\\modules\\state_of_the_union.txt'}), 0.82153815)]
```
## Perform a Hybrid Search

Execute a hybrid search using the `search_type` argument or the `hybrid_search()` method:

```python
# Perform a hybrid search
docs = vector_store.similarity_search(
    query="What did the president say about Ketanji Brown Jackson",
    k=3,
    search_type="hybrid",
)
print(docs[0].page_content)
```

```
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
```
```python
# Perform a hybrid search
docs = vector_store.hybrid_search(
    query="What did the president say about Ketanji Brown Jackson",
    k=3,
)
print(docs[0].page_content)
```

```
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
```
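If you also need the relevance scores for hybrid results, recent versions of the `AzureSearch` store expose a `hybrid_search_with_score()` method alongside `hybrid_search()`. A minimal sketch, assuming that method is available in your installed langchain version:

```python
# Hybrid search returning (Document, score) tuples.
# Assumes hybrid_search_with_score() exists in your langchain version.
docs_and_scores = vector_store.hybrid_search_with_score(
    query="What did the president say about Ketanji Brown Jackson",
    k=3,
)
for doc, score in docs_and_scores:
    print(round(score, 4), doc.page_content[:80])
```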
## Create a new index with custom filterable fields

```python
from azure.search.documents.indexes.models import (
    SearchableField,
    SearchField,
    SearchFieldDataType,
    SimpleField,
    ScoringProfile,
    TextWeights,
)

embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
embedding_function = embeddings.embed_query

fields = [
    SimpleField(
        name="id",
        type=SearchFieldDataType.String,
        key=True,
        filterable=True,
    ),
    SearchableField(
        name="content",
        type=SearchFieldDataType.String,
        searchable=True,
    ),
    SearchField(
        name="content_vector",
        type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
        searchable=True,
        vector_search_dimensions=len(embedding_function("Text")),
        vector_search_configuration="default",
    ),
    SearchableField(
        name="metadata",
        type=SearchFieldDataType.String,
        searchable=True,
    ),
    # Additional field to store the title
    SearchableField(
        name="title",
        type=SearchFieldDataType.String,
        searchable=True,
    ),
    # Additional field for filtering on document source
    SimpleField(
        name="source",
        type=SearchFieldDataType.String,
        filterable=True,
    ),
]

index_name: str = "langchain-vector-demo-custom"

vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    embedding_function=embedding_function,
    fields=fields,
)
```

## Perform a query with a custom filter

```python
# Data in the metadata dictionary with a corresponding field in the index
# will be added to the index. In this example, the metadata dictionary
# contains a title, a source, and a random field. The title and the source
# will be added to the index as separate fields, but the random field won't,
# as it is not defined in the fields list; it will only be stored in the
# metadata field.
vector_store.add_texts(
    ["Test 1", "Test 2", "Test 3"],
    [
        {"title": "Title 1", "source": "A", "random": "10290"},
        {"title": "Title 2", "source": "A", "random": "48392"},
        {"title": "Title 3", "source": "B", "random": "32893"},
    ],
)

res = vector_store.similarity_search(query="Test 3 source1", k=3, search_type="hybrid")
res
```

```
[Document(page_content='Test 3', metadata={'title': 'Title 3', 'source': 'B', 'random': '32893'}),
 Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}),
 Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]
```

```python
res = vector_store.similarity_search(
    query="Test 3 source1", k=3, search_type="hybrid", filters="source eq 'A'"
)
res
```

```
[Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}),
 Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]
```
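The `filters` argument passes an OData `$filter` expression through to Azure Cognitive Search, so conditions over fields marked `filterable=True` can be combined with boolean operators. A minimal, hypothetical sketch (`source` is the only custom filterable field in this index):

```python
# Combine OData conditions on the filterable `source` field.
res = vector_store.similarity_search(
    query="Test 3 source1",
    k=3,
    search_type="hybrid",
    filters="source eq 'A' or source eq 'B'",
)
```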
3 source1", k=3, search_type="hybrid", filters="source eq 'A'")res [Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}), Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]Create a new index with a Scoring Profilefrom azure.search.documents.indexes.models import ( SearchableField, SearchField, SearchFieldDataType, SimpleField, ScoringProfile, TextWeights, ScoringFunction, FreshnessScoringFunction, FreshnessScoringParameters)embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)embedding_function = embeddings.embed_queryfields = [ SimpleField( name="id", type=SearchFieldDataType.String, key=True, filterable=True, ), SearchableField( name="content", type=SearchFieldDataType.String, searchable=True, ), SearchField( name="content_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=len(embedding_function("Text")), vector_search_configuration="default", ), SearchableField( name="metadata", type=SearchFieldDataType.String, searchable=True, ), # Additional field to store the title SearchableField( name="title", type=SearchFieldDataType.String, searchable=True, ), # Additional field for filtering on document source SimpleField( name="source", type=SearchFieldDataType.String, filterable=True, ), # Additional data field for last doc update SimpleField( name="last_update", type=SearchFieldDataType.DateTimeOffset, searchable=True, filterable=True )]# Adding a custom scoring profile with a freshness functionsc_name = "scoring_profile"sc = ScoringProfile( name=sc_name, text_weights=TextWeights(weights={"title": 5}),
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications. ->: 3 source1", k=3, search_type="hybrid", filters="source eq 'A'")res [Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}), Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]Create a new index with a Scoring Profilefrom azure.search.documents.indexes.models import ( SearchableField, SearchField, SearchFieldDataType, SimpleField, ScoringProfile, TextWeights, ScoringFunction, FreshnessScoringFunction, FreshnessScoringParameters)embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)embedding_function = embeddings.embed_queryfields = [ SimpleField( name="id", type=SearchFieldDataType.String, key=True, filterable=True, ), SearchableField( name="content", type=SearchFieldDataType.String, searchable=True, ), SearchField( name="content_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=len(embedding_function("Text")), vector_search_configuration="default", ), SearchableField( name="metadata", type=SearchFieldDataType.String, searchable=True, ), # Additional field to store the title SearchableField( name="title", type=SearchFieldDataType.String, searchable=True, ), # Additional field for filtering on document source SimpleField( name="source", type=SearchFieldDataType.String, filterable=True, ), # Additional data field for last doc update SimpleField( name="last_update", type=SearchFieldDataType.DateTimeOffset, searchable=True, filterable=True )]# Adding a custom scoring profile with a freshness functionsc_name = "scoring_profile"sc = ScoringProfile( name=sc_name, text_weights=TextWeights(weights={"title": 5}),
2,006
```python
# Adding the same data with different last_update values to show the
# Scoring Profile effect
from datetime import datetime, timedelta

today = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S-00:00')
yesterday = (datetime.utcnow() - timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%S-00:00')
one_month_ago = (datetime.utcnow() - timedelta(days=30)).strftime('%Y-%m-%dT%H:%M:%S-00:00')

vector_store.add_texts(
    ["Test 1", "Test 1", "Test 1"],
    [
        {"title": "Title 1", "source": "source1", "random": "10290", "last_update": today},
        {"title": "Title 1", "source": "source1", "random": "48392", "last_update": yesterday},
        {"title": "Title 1", "source": "source1", "random": "32893", "last_update": one_month_ago},
    ],
)
```

```
['NjQyNTI5ZmMtNmVkYS00Njg5LTk2ZDgtMjM3OTY4NTJkYzFj',
 'M2M0MGExZjAtMjhiZC00ZDkwLThmMTgtODNlN2Y2ZDVkMTMw',
 'ZmFhMDE1NzMtMjZjNS00MTFiLTk0MTEtNGRkYjgwYWQwOTI0']
```

```python
res = vector_store.similarity_search(query="Test 1", k=3, search_type="similarity")
res
```

```
[Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '10290', 'last_update': '2023-07-13T10:47:39-00:00'}),
 Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '48392', 'last_update': '2023-07-12T10:47:39-00:00'}),
 Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '32893', 'last_update': '2023-06-13T10:47:39-00:00'})]
```
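Because `last_update` was declared `filterable=True` on a `DateTimeOffset` field, OData comparison operators can also restrict results by recency. A hedged sketch; the cutoff timestamp below is illustrative:

```python
# Keep only documents updated on or after the (illustrative) cutoff date.
res = vector_store.similarity_search(
    query="Test 1",
    k=3,
    search_type="hybrid",
    filters="last_update ge 2023-07-01T00:00:00Z",
)
```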
# Tigris
Tigris is an open-source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.
Tigris eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.

This notebook guides you through using Tigris as your VectorStore.

## Prerequisites

- An OpenAI account. You can sign up for an account here.
- Sign up for a free Tigris account. Once you have signed up for the Tigris account, create a new project called `vectordemo`. Next, make a note of the Uri for the region you've created your project in, the clientId, and the clientSecret. You can get all this information from the Application Keys section of the project.

Let's first install our dependencies:

```bash
pip install tigrisdb openapi-schema-pydantic openai tiktoken
```

We will load the OpenAI API key and Tigris credentials into our environment:

```python
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
os.environ["TIGRIS_PROJECT"] = getpass.getpass("Tigris Project Name:")
os.environ["TIGRIS_CLIENT_ID"] = getpass.getpass("Tigris Client Id:")
os.environ["TIGRIS_CLIENT_SECRET"] = getpass.getpass("Tigris Client Secret:")
```

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Tigris
from langchain.document_loaders import TextLoader
```

## Initialize Tigris vector store

Let's import our test dataset:

```python
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
vector_store = Tigris.from_documents(docs, embeddings, index_name="my_embeddings")
```

## Similarity Search

```python
query = "What did the president say about Ketanji Brown Jackson"
found_docs = vector_store.similarity_search(query)
print(found_docs)
```

## Similarity Search with score (vector distance)

```python
query = "What did the president say about Ketanji Brown Jackson"
result = vector_store.similarity_search_with_score(query)
for doc, score in result:
    print(f"document={doc}, score={score}")
```
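Like any LangChain vector store, the Tigris store can also be wrapped as a retriever via the standard `as_retriever()` method; a minimal sketch (the `search_kwargs` shown are illustrative):

```python
# Expose the vector store through LangChain's standard retriever interface.
retriever = vector_store.as_retriever(search_kwargs={"k": 4})
docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
print(docs[0].page_content)
```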
# Vespa
Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
This notebook shows how to use Vespa.ai as a LangChain vector store.

In order to create the vector store, we use pyvespa to create a connection to a Vespa service.

```python
#!pip install pyvespa
```

Using the pyvespa package, you can either connect to a Vespa Cloud instance or a local Docker instance. Here, we will create a new Vespa application and deploy that using Docker.
## Creating a Vespa application

First, we need to create an application package:

```python
from vespa.package import ApplicationPackage, Field, RankProfile

app_package = ApplicationPackage(name="testapp")
app_package.schema.add_fields(
    Field(
        name="text",
        type="string",
        indexing=["index", "summary"],
        index="enable-bm25",
    ),
    Field(
        name="embedding",
        type="tensor<float>(x[384])",
        indexing=["attribute", "summary"],
        attribute=["distance-metric: angular"],
    ),
)
app_package.schema.add_rank_profile(
    RankProfile(
        name="default",
        first_phase="closeness(field, embedding)",
        inputs=[("query(query_embedding)", "tensor<float>(x[384])")],
    )
)
```

This sets up a Vespa application with a schema for each document that contains two fields: `text` for holding the document text and `embedding` for holding the embedding vector. The `text` field is set up to use a BM25 index for efficient text retrieval, and we'll see how to use this and hybrid search a bit later.

The `embedding` field is set up with a vector of length 384 to hold the embedding representation of the text. See Vespa's Tensor Guide for more on tensors in Vespa.

Lastly, we add a rank profile to instruct Vespa how to order documents. Here we set this up with a nearest neighbor search.

Now we can deploy this application locally:

```python
from vespa.deployment import VespaDocker

vespa_docker = VespaDocker()
vespa_app = vespa_docker.deploy(application_package=app_package)
```

This deploys and creates a connection to a Vespa service. In case you already have a Vespa application running, for instance in the cloud, please refer to the PyVespa application for how to connect.
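For comparison, connecting to Vespa Cloud instead of Docker goes through pyvespa's `VespaCloud` deployment class. The sketch below is an assumption-heavy outline: the tenant, application, and key path are placeholders, and the exact parameter names can vary between pyvespa versions:

```python
from vespa.deployment import VespaCloud

# Placeholders: substitute your own tenant, application, and API key path.
vespa_cloud = VespaCloud(
    tenant="my-tenant",
    application="my-app",
    application_package=app_package,
    key_location="/path/to/private-key.pem",
)
vespa_app = vespa_cloud.deploy()
```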
## Creating a Vespa vector store

Now, let's load some documents:

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings

embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
```

Here, we also set up a local sentence embedder to transform the text to embedding vectors. One could also use OpenAI embeddings, but the vector length needs to be updated to 1536 to reflect the larger size of that embedding (see the sketch at the end of this section).

To feed these to Vespa, we need to configure how the vector store should map to fields in the Vespa application. Then we create the vector store directly from this set of documents:

```python
vespa_config = dict(
    page_content_field="text",
    embedding_field="embedding",
    input_field="query_embedding",
)

from langchain.vectorstores import VespaStore

db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)
```

This creates a Vespa vector store and feeds that set of documents to Vespa. The vector store takes care of calling the embedding function for each document and inserting them into the database.

We can now query the vector store:

```python
query = "What did the president say about Ketanji Brown Jackson"
results = db.similarity_search(query)
print(results[0].page_content)
```

This will use the embedding function given above to create a representation for the query and use that to search Vespa. Note that this will use the default ranking function, which we set up in the application package above. You can use the `ranking` argument to `similarity_search` to specify which ranking function to use.

Please refer to the pyvespa documentation for more information.
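As noted above, switching to OpenAI embeddings means widening the schema to 1536 dimensions. A minimal sketch of the pieces that change, assuming the rest of the application package stays as defined earlier (the field and the rank-profile input must agree on the dimension):

```python
from langchain.embeddings.openai import OpenAIEmbeddings

# Redefine the embedding field and rank profile with 1536 dimensions
# before deploying; OpenAI's text embeddings are 1536 floats long.
app_package.schema.add_fields(
    Field(
        name="embedding",
        type="tensor<float>(x[1536])",
        indexing=["attribute", "summary"],
        attribute=["distance-metric: angular"],
    ),
)
app_package.schema.add_rank_profile(
    RankProfile(
        name="default",
        first_phase="closeness(field, embedding)",
        inputs=[("query(query_embedding)", "tensor<float>(x[1536])")],
    )
)
embedding_function = OpenAIEmbeddings()
```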
This covers the basic usage of the Vespa store in LangChain. Now you can return the results and continue using these in LangChain.

## Updating documents

As an alternative to calling `from_documents`, you can create the vector store directly and call `add_texts` from that. This can also be used to update documents:

```python
query = "What did the president say about Ketanji Brown Jackson"
results = db.similarity_search(query)
result = results[0]

result.page_content = "UPDATED: " + result.page_content
db.add_texts([result.page_content], [result.metadata], result.metadata["id"])

results = db.similarity_search(query)
print(results[0].page_content)
```

However, the pyvespa library contains methods to manipulate content on Vespa which you can use directly.

## Deleting documents

You can delete documents using the `delete` function:

```python
result = db.similarity_search(query)
# docs[0].metadata["id"] == "id:testapp:testapp::32"

db.delete(["32"])
result = db.similarity_search(query)
# docs[0].metadata["id"] != "id:testapp:testapp::32"
```

Again, the pyvespa connection contains methods to delete documents as well.

## Returning with scores

The `similarity_search` method only returns the documents in order of relevancy. To retrieve the actual scores:

```python
results = db.similarity_search_with_score(query)
result = results[0]
# result[1] ~= 0.463
```

This is a result of using the "all-MiniLM-L6-v2" embedding model with the cosine distance function (as given by the argument `angular` in the application function).

Different embedding functions need different distance functions, and Vespa needs to know which distance function to use when ordering documents. Please refer to the documentation on distance functions for more information (a brief sketch follows the retriever example below).

## As retriever

To use this vector store as a LangChain retriever, simply call the `as_retriever` function, which is a standard vector store method:

```python
db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)
retriever = db.as_retriever()
query = "What did the president say about Ketanji Brown Jackson"
results = retriever.get_relevant_documents(query)
# results[0].metadata["id"] == "id:testapp:testapp::32"
```

This allows for more general, unstructured retrieval from the vector store.
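Picking up the note on distance functions from the scores section above: the metric is declared on the embedding field in the application package, so an embedding model trained for Euclidean distance would swap the attribute accordingly. A hedged sketch using Vespa's documented `euclidean` metric:

```python
# Declare Euclidean distance instead of angular (cosine-like) distance
# when the embedding model expects Euclidean geometry.
app_package.schema.add_fields(
    Field(
        name="embedding",
        type="tensor<float>(x[384])",
        indexing=["attribute", "summary"],
        attribute=["distance-metric: euclidean"],
    ),
)
```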
method:db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)retriever = db.as_retriever()query = "What did the president say about Ketanji Brown Jackson"results = retriever.get_relevant_documents(query)# results[0].metadata["id"] == "id:testapp:testapp::32"This allows for more general, unstructured, retrieval from the vector store.Metadata‚ÄãIn the example so far, we've only used the text and the embedding for that text. Documents usually contain additional information, which in LangChain is referred to as metadata.Vespa can contain many fields with different types by adding them to the application package:app_package.schema.add_fields( # ... Field(name="date", type="string", indexing=["attribute", "summary"]), Field(name="rating", type="int", indexing=["attribute", "summary"]), Field(name="author", type="string", indexing=["attribute", "summary"]), # ...)vespa_app = vespa_docker.deploy(application_package=app_package)We can add some metadata fields in the documents:# Add metadatafor i, doc in enumerate(docs): doc.metadata["date"] = f"2023-{(i % 12)+1}-{(i % 28)+1}" doc.metadata["rating"] = range(1, 6)[i % 5] doc.metadata["author"] = ["Joe Biden", "Unknown"][min(i, 1)]And let the Vespa vector store know about these fields:vespa_config.update(dict(metadata_fields=["date", "rating", "author"]))Now, when searching for these documents, these fields will be returned. Also, these fields can be filtered on:db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)query = "What did the president say about Ketanji Brown Jackson"results = db.similarity_search(query, filter="rating > 3")# results[0].metadata["id"] == "id:testapp:testapp::34"# results[0].metadata["author"] == "Unknown"Custom query‚ÄãIf the default behavior of the similarity search does not fit your requirements, you can always provide your own query. Thus, you don't
need to provide all of the configuration to the vector store, but can simply write the query yourself.First, let's add a BM25 ranking function to our application:from vespa.package import FieldSetapp_package.schema.add_field_set(FieldSet(name="default", fields=["text"]))app_package.schema.add_rank_profile(RankProfile(name="bm25", first_phase="bm25(text)"))vespa_app = vespa_docker.deploy(application_package=app_package)db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)Then, to perform a regular text search based on BM25:query = "What did the president say about Ketanji Brown Jackson"custom_query = { "yql": "select * from sources * where userQuery()", "query": query, "type": "weakAnd", "ranking": "bm25", "hits": 4}results = db.similarity_search_with_score(query, custom_query=custom_query)# results[0][0].metadata["id"] == "id:testapp:testapp::32"# results[0][1] ~= 14.384All of the powerful search and query capabilities of Vespa can be used through a custom query. Please refer to the Vespa documentation on its Query API for more details.Hybrid search​Hybrid search combines a classic term-based search such as BM25 with a vector search, merging the results. We need to create a new rank profile for hybrid search on Vespa:app_package.schema.add_rank_profile( RankProfile(name="hybrid", first_phase="log(bm25(text)) + 0.5 * closeness(field, embedding)", inputs=[("query(query_embedding)", "tensor<float>(x[384])")] ))vespa_app = vespa_docker.deploy(application_package=app_package)db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)Here, we score each document as a combination of its BM25 score and its
distance score. We can query using a custom query:query = "What did the president say about Ketanji Brown Jackson"query_embedding = embedding_function.embed_query(query)nearest_neighbor_expression = "{targetHits: 4}nearestNeighbor(embedding, query_embedding)"custom_query = { "yql": f"select * from sources * where {nearest_neighbor_expression} and userQuery()", "query": query, "type": "weakAnd", "input.query(query_embedding)": query_embedding, "ranking": "hybrid", "hits": 4}results = db.similarity_search_with_score(query, custom_query=custom_query)# results[0][0].metadata["id"] == "id:testapp:testapp::32"# results[0][1] ~= 2.897Native embedders in Vespa​Up until this point, we've used an embedding function in Python to provide embeddings for the texts. Vespa supports embedding functions natively, so you can defer this calculation to Vespa. One benefit is the ability to use GPUs when embedding documents if you have a large collection.Please refer to Vespa embeddings for more information.First, we need to modify our application package:from vespa.package import Component, Parameterapp_package.components = [ Component(id="hf-embedder", type="hugging-face-embedder", parameters=[ Parameter("transformer-model", {"path": "..."}), Parameter("tokenizer-model", {"url": "..."}), ] )]Field(name="hfembedding", type="tensor<float>(x[384])", is_document_field=False, indexing=["input text", "embed hf-embedder", "attribute", "summary"], attribute=["distance-metric: angular"], )app_package.schema.add_rank_profile( RankProfile(name="hf_similarity", first_phase="closeness(field, hfembedding)", inputs=[("query(query_embedding)", "tensor<float>(x[384])")] ))Please refer to the embeddings documentation on adding embedder models and tokenizers to the application. Note that the hfembedding field
includes instructions for embedding using the hf-embedder.Now we can query with a custom query (note that the field and rank profile names match the hfembedding field and hf_similarity profile defined above):query = "What did the president say about Ketanji Brown Jackson"nearest_neighbor_expression = "{targetHits: 4}nearestNeighbor(hfembedding, query_embedding)"custom_query = { "yql": f"select * from sources * where {nearest_neighbor_expression}", "input.query(query_embedding)": f"embed(hf-embedder, \"{query}\")", "ranking": "hf_similarity", "hits": 4}results = db.similarity_search_with_score(query, custom_query=custom_query)# results[0][0].metadata["id"] == "id:testapp:testapp::32"# results[0][1] ~= 0.630Note that the query here includes an embed instruction to embed the query using the same model as for the documents.Approximate nearest neighbor​In all of the above examples, we've used exact nearest neighbor search to find results. However, for large collections of documents this is not feasible, as one has to scan through all documents to find the best matches. To avoid this, we can use approximate nearest neighbors.First, we can change the embedding field to create an HNSW index:from vespa.package import HNSWapp_package.schema.add_fields( Field(name="embedding", type="tensor<float>(x[384])", indexing=["attribute", "summary", "index"], ann=HNSW(distance_metric="angular", max_links_per_node=16, neighbors_to_explore_at_insert=200) ))This creates an HNSW index on the embedding data which allows for efficient searching. With this set up, we can easily search using ANN by setting
the approximate argument to True:query = "What did the president say about Ketanji Brown Jackson"results = db.similarity_search(query, approximate=True)# results[0].metadata["id"] == "id:testapp:testapp::32"This covers most of the functionality in the Vespa vector store in LangChain.
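As a closing sketch (not from the original guide), the pieces above can be combined in a single call: approximate nearest-neighbor search over the HNSW index, restricted by a metadata filter. It assumes the application package, metadata fields, and HNSW index defined earlier on this page, and uses only the keyword arguments (k, filter, approximate) shown in the examples above; whether they can all be combined in one call is an assumption worth verifying.

# Hedged sketch: ANN search plus metadata filtering, assuming the
# app package, metadata fields, and HNSW index configured above.
db = VespaStore.from_documents(docs, embedding_function, app=vespa_app, **vespa_config)
query = "What did the president say about Ketanji Brown Jackson"
results = db.similarity_search(
    query,
    k=4,                  # number of hits to return
    approximate=True,     # use the HNSW index instead of exact scanning
    filter="rating > 3",  # restrict hits with a metadata filter
)
for doc in results:
    print(doc.metadata["id"], doc.page_content[:80])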
ScaNN | 🦜️🔗 Langchain
ScaNN (Scalable Nearest Neighbors) is a method for efficient vector similarity search at scale.
ScaNN includes search space pruning and quantization for Maximum Inner Product Search and also supports other distance functions such as Euclidean distance. The implementation is optimized for x86 processors with AVX2 support. See its Google Research GitHub for more details.Installation​Install ScaNN through pip. Alternatively, you can follow the instructions on the ScaNN website to install from source.pip install scannRetrieval Demo​Below we show how to use ScaNN in conjunction with Hugging Face embeddings.from langchain.embeddings import HuggingFaceEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import ScaNNfrom langchain.document_loaders import TextLoaderloader = TextLoader("state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)embeddings = HuggingFaceEmbeddings()db = ScaNN.from_documents(docs, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)docs[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'state_of_the_union.txt'})RetrievalQA Demo​Next, we demonstrate using ScaNN in conjunction with the Google PaLM API.You can obtain an API key from https://developers.generativeai.google/tutorials/setupfrom langchain.chains import RetrievalQAfrom langchain.chat_models import google_palmpalm_client = google_palm.ChatGooglePalm(google_api_key='YOUR_GOOGLE_PALM_API_KEY')qa = RetrievalQA.from_chain_type( llm=palm_client, chain_type="stuff", retriever=db.as_retriever(search_kwargs={'k': 10}))print(qa.run('What did the president say about Ketanji Brown Jackson?')) The president said that Ketanji Brown Jackson is one of our nation's top legal minds, who will continue Justice Breyer's legacy of excellence.print(qa.run('What did the president say about Michael Phelps?'))
The president did not mention Michael Phelps in his speech.Saving and loading a local retrieval index​db.save_local('/tmp/db', 'state_of_union')restored_db = ScaNN.load_local('/tmp/db', embeddings, index_name='state_of_union')
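As a quick sanity check on the round trip (a sketch, assuming the db, embeddings, and restored_db objects from the demo above), you can query both indexes and compare the top hit. similarity_search_with_score is the standard LangChain vector store method that returns (document, score) pairs:

# Verify that the restored index returns the same top hit as the original.
query = "What did the president say about Ketanji Brown Jackson"
original_hits = db.similarity_search_with_score(query, k=1)
restored_hits = restored_db.similarity_search_with_score(query, k=1)
assert original_hits[0][0].page_content == restored_hits[0][0].page_content
print(restored_hits[0][1])  # distance score of the top hit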
Google Vertex AI Search | 🦜️🔗 Langchain
Vertex AI Search (formerly known as Enterprise Search on Generative AI App Builder) is a part of the Vertex AI machine learning platform offered by Google Cloud.
Vertex AI Search lets organizations quickly build generative AI-powered search engines for customers and employees. It’s underpinned by a variety of Google Search technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning to infer relationships within the content and intent from the user’s query input. Vertex AI Search also benefits from Google’s expertise in understanding how users search, and it factors in content relevance to order displayed results.Vertex AI Search is available in the Google Cloud Console and via an API for enterprise workflow integration.This notebook demonstrates how to configure Vertex AI Search and use the Vertex AI Search retriever. The Vertex AI Search retriever encapsulates the Python client library and uses it to access the Search Service API.Install pre-requisites​You need to install the google-cloud-discoveryengine package to use the Vertex AI Search retriever.
pip install google-cloud-discoveryengineConfigure access to Google Cloud and Vertex AI Search​Vertex AI Search is generally available without an allowlist as of August 2023.Before you can use the retriever, you need to complete the following steps:Create a search engine and populate an unstructured data store​Follow the instructions in the Vertex AI Search Getting Started guide to set up a Google Cloud project and Vertex AI Search.Use the Google Cloud Console to create an unstructured data store.Populate it with the example PDF documents from the gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs Cloud Storage folder.Make sure to use the Cloud Storage (without metadata) option.Set credentials to access the Vertex AI Search API​The Vertex AI Search client libraries used by the Vertex AI Search retriever provide high-level language support for authenticating to Google Cloud programmatically.
Client libraries support Application Default Credentials (ADC); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to the API. With ADC, you can make credentials available to your application in a variety of environments, such as local development or production, without needing to modify your application code.If running in Google Colab, authenticate with google.colab.auth; otherwise, follow one of the supported methods to make sure that your Application Default Credentials are properly set.import sysif "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user()Configure and use the Vertex AI Search retriever​The Vertex AI Search retriever is implemented in the langchain.retrievers.GoogleVertexAISearchRetriever class. The get_relevant_documents method returns a list of langchain.schema.Document documents where the page_content field of each document is populated with the document content.
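Outside Colab, one way to sanity-check that ADC is configured before constructing the retriever is to resolve the default credentials directly. This is a sketch using google.auth.default() from the google-auth package (a dependency of the Google Cloud client libraries), not a step from the original notebook:

import google.auth

# Searches the standard ADC locations (GOOGLE_APPLICATION_CREDENTIALS,
# gcloud user credentials, attached service account) and raises
# google.auth.exceptions.DefaultCredentialsError if nothing is found.
credentials, project_id = google.auth.default()
print(f"Application Default Credentials resolved for project: {project_id}")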
Depending on the data type used in Vertex AI Search (structured or unstructured), the page_content field is populated as follows:Unstructured data source: either an extractive segment or an extractive answer that matches a query. The metadata field is populated with metadata (if any) of the document from which the segments or answers were extracted.Structured data source: a JSON string containing all the fields returned from the structured data source. The metadata field is populated with metadata (if any) of the document.Only for Unstructured data sources:​An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search.An extractive segment is verbatim text that is returned with each search result. An extractive segment is usually more verbose than an extractive answer. Extractive segments can be displayed as an answer to a query, and can be used to perform post-processing tasks and as input for large language models to generate answers or new text. Extractive segments are available for unstructured search.For more information about extractive segments and extractive answers, refer to the product documentation.NOTE: Extractive segments require the Enterprise edition features to be enabled.When creating an instance of the retriever, you can specify a number of parameters that control which data store to access and how a natural language query is processed, including configurations for extractive answers and segments.The mandatory parameters are:​project_id - Your Google Cloud Project ID.location_id - The location of the data store. One of global (default), us, or eu.data_store_id - The ID of the data store you want to use. Note: This was called
search_engine_id in previous versions of the retriever.The project_id and data_store_id parameters can be provided explicitly in the retriever's constructor or through the environment variables PROJECT_ID and DATA_STORE_ID.You can also configure a number of optional parameters, including:max_documents - The maximum number of documents used to provide extractive segments or extractive answers.get_extractive_answers - By default, the retriever is configured to return extractive segments. Set this field to True to return extractive answers. This is used only when engine_data_type is set to 0 (unstructured).max_extractive_answer_count - The maximum number of extractive answers returned in each search result. At most 5 answers will be returned. This is used only when engine_data_type is set to 0 (unstructured).max_extractive_segment_count - The maximum number of extractive segments returned in each search result. Currently one segment will be returned. This is used only when engine_data_type is set to 0 (unstructured).filter - The filter expression for the search results, based on the metadata associated with the documents in the data store.query_expansion_condition - Specification to determine under which conditions query expansion should occur. 0 - Unspecified query expansion condition; in this case, server behavior defaults to disabled. 1 - Disabled query expansion; only the exact search query is used, even if SearchResponse.total_size is zero. 2 - Automatic query expansion built by the Search API.engine_data_type - Defines the Vertex AI Search data type: 0 - Unstructured data, 1 - Structured data.Migration guide for GoogleCloudEnterpriseSearchRetriever​In previous versions, this retriever was called GoogleCloudEnterpriseSearchRetriever. Some backwards-incompatible changes had to be made to the retriever after the General Availability launch due to changes in the product behavior.To update to the new retriever, make the following changes:
Change the import from: from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever -> from langchain.retrievers import GoogleVertexAISearchRetriever.Change all class references from GoogleCloudEnterpriseSearchRetriever -> GoogleVertexAISearchRetriever.Upon class initialization, change the search_engine_id parameter name to data_store_id.Configure and use the retriever for unstructured data with extractive segments​from langchain.retrievers import GoogleVertexAISearchRetriever, GoogleVertexAIMultiTurnSearchRetrieverPROJECT_ID = "<YOUR PROJECT ID>" # Set to your Project IDLOCATION_ID = "<YOUR LOCATION>" # Set to your data store locationDATA_STORE_ID = "<YOUR DATA STORE ID>" # Set to your data store IDretriever = GoogleVertexAISearchRetriever( project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID, max_documents=3,)query = "What are Alphabet's Other Bets?"result = retriever.get_relevant_documents(query)for doc in result: print(doc)Configure and use the retriever for unstructured data with extractive answers​retriever = GoogleVertexAISearchRetriever( project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID, max_documents=3, max_extractive_answer_count=3, get_extractive_answers=True,)result = retriever.get_relevant_documents(query)for doc in result: print(doc)Configure and use the retriever for structured data​retriever = GoogleVertexAISearchRetriever( project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID, max_documents=3, engine_data_type=1,)result = retriever.get_relevant_documents(query)for doc in result: print(doc)Configure and use the retriever for multi-turn search​Search with follow-ups is based on generative AI models, and it is different from the regular unstructured data search.retriever = GoogleVertexAIMultiTurnSearchRetriever( project_id=PROJECT_ID,
location_id=LOCATION_ID, data_store_id=DATA_STORE_ID)result = retriever.get_relevant_documents(query)for doc in result: print(doc)
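Because the retriever implements the standard LangChain retriever interface, it can be dropped into any chain that accepts one. The following sketch (not part of the original notebook) wires the unstructured-data retriever into a RetrievalQA chain; ChatVertexAI is assumed here as the LLM and requires configured Vertex AI access:

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatVertexAI  # assumed available; requires Vertex AI setup

qa = RetrievalQA.from_chain_type(
    llm=ChatVertexAI(),
    chain_type="stuff",  # stuff all retrieved documents into the prompt
    retriever=retriever,
)
print(qa.run("What are Alphabet's Other Bets?"))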
Pinecone Hybrid Search | 🦜️🔗 Langchain
Pinecone is a vector database with broad functionality.
This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.The logic of this retriever is taken from this documentation.To use Pinecone, you must have an API key and an Environment.
Here are the installation instructions.#!pip install pinecone-client pinecone-textimport osimport getpassos.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API Key:")from langchain.retrievers import PineconeHybridSearchRetrieveros.environ["PINECONE_ENVIRONMENT"] = getpass.getpass("Pinecone Environment:")We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Setup Pinecone​You should only have to do this part once.Note: it's important to make sure that the "context" field that holds the document text in the metadata is not indexed. Currently, you need to explicitly specify the fields you do want to index. For more information, check out Pinecone's docs.import osimport pineconeapi_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"# find the environment next to your API key in the Pinecone consoleenv = os.getenv("PINECONE_ENVIRONMENT") or "PINECONE_ENVIRONMENT"index_name = "langchain-pinecone-hybrid-search"pinecone.init(api_key=api_key, environment=env)pinecone.whoami() WhoAmIResponse(username='load', user_label='label', projectname='load-test')# create the indexpinecone.create_index( name=index_name, dimension=1536, # dimensionality of the dense model metric="dotproduct", # sparse values are supported only for dotproduct pod_type="s1", metadata_config={"indexed": []}, # see explanation above)Now that it's created, we can use it:index = pinecone.Index(index_name)Get embeddings and sparse encoders​Embeddings are used for the dense vectors; a tokenizer is used for the sparse vectors.from langchain.embeddings import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()To encode the text to sparse values, you can choose either SPLADE or BM25. For out-of-domain tasks, we recommend using BM25.For more information about the sparse encoders, check out the pinecone-text library docs.from pinecone_text.sparse import BM25Encoder# or from
pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE# use default tf-idf valuesbm25_encoder = BM25Encoder().default()The above code uses default tf-idf values. It's highly recommended to fit the tf-idf values to your own corpus. You can do so as follows:corpus = ["foo", "bar", "world", "hello"]# fit tf-idf values on your corpusbm25_encoder.fit(corpus)# store the values to a json filebm25_encoder.dump("bm25_values.json")# load them into your BM25Encoder objectbm25_encoder = BM25Encoder().load("bm25_values.json")Load Retriever​We can now construct the retriever!retriever = PineconeHybridSearchRetriever( embeddings=embeddings, sparse_encoder=bm25_encoder, index=index)Add texts (if necessary)​We can optionally add texts to the retriever (if they aren't already in there).retriever.add_texts(["foo", "bar", "world", "hello"]) 100%|██████████| 1/1 [00:02<00:00, 2.27s/it]Use Retriever​We can now use the retriever!result = retriever.get_relevant_documents("foo")result[0] Document(page_content='foo', metadata={})
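The retriever also exposes knobs for the hybrid trade-off. As a sketch (parameter names taken from the PineconeHybridSearchRetriever fields at the time of writing; verify them against the API reference), alpha weights dense versus sparse scores and top_k controls how many documents are returned:

# Lean towards lexical (sparse) matching and return only two documents.
retriever = PineconeHybridSearchRetriever(
    embeddings=embeddings,
    sparse_encoder=bm25_encoder,
    index=index,
    top_k=2,    # number of documents to return
    alpha=0.3,  # 0 = pure sparse (BM25), 1 = pure dense embeddings
)
result = retriever.get_relevant_documents("foo")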
RePhraseQuery | 🦜️🔗 Langchain
RePhraseQuery is a simple retriever that applies an LLM between the user input and the query passed to the retriever. It can be used to pre-process the user input in any way.

Example

Setting up

Create a vector store.

```python
import logging

from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.retrievers import RePhraseQueryRetriever

logging.basicConfig()
logging.getLogger("langchain.retrievers.re_phraser").setLevel(logging.INFO)

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)

vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())
```

Using the default prompt

The default prompt used in the from_llm classmethod:

```python
DEFAULT_TEMPLATE = """You are an assistant tasked with taking a natural language \
query from a user and converting it into a query for a vectorstore. \
In this process, you strip out information that is not relevant for \
the retrieval task. Here is the user query: {question}"""
```
```python
llm = ChatOpenAI(temperature=0)
retriever_from_llm = RePhraseQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(), llm=llm
)

docs = retriever_from_llm.get_relevant_documents(
    "Hi I'm Lance. What are the approaches to Task Decomposition?"
)
```

    INFO:langchain.retrievers.re_phraser:Re-phrased question: The user query can be converted into a query for a vectorstore as follows: "approaches to Task Decomposition"

```python
docs = retriever_from_llm.get_relevant_documents(
    "I live in San Francisco. What are the Types of Memory?"
)
```

    INFO:langchain.retrievers.re_phraser:Re-phrased question: Query for vectorstore: "Types of Memory"

Custom prompt

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an assistant tasked with taking a natural language query from a user
    and converting it into a query for a vectorstore. In the process, strip out all
    information that is not relevant for the retrieval task and return a new, simplified
    question for vectorstore retrieval. The new user query should be in pirate speech.
    Here is the user query: {question} """,
)

llm = ChatOpenAI(temperature=0)
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT)

retriever_from_llm_chain = RePhraseQueryRetriever(
    retriever=vectorstore.as_retriever(), llm_chain=llm_chain
)

docs = retriever_from_llm_chain.get_relevant_documents(
    "Hi I'm Lance. What is Maximum Inner Product Search?"
)
```

    INFO:langchain.retrievers.re_phraser:Re-phrased question: Ahoy matey! What be Maximum Inner Product Search, ye scurvy dog?
Google Drive | 🦜️🔗 Langchain
This notebook covers how to retrieve documents from Google Drive.

Prerequisites

- Create a Google Cloud project or use an existing project
- Enable the Google Drive API
- Authorize credentials for a desktop app
- `pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib`

Retrieve the Google Docs

By default, the GoogleDriveRetriever expects the credentials.json file to be at ~/.credentials/credentials.json, but this is configurable using the GOOGLE_ACCOUNT_FILE environment variable.
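For example, to point the retriever at credentials stored elsewhere, set the environment variable before constructing it; the path below is a hypothetical placeholder:

```python
import os

# Hypothetical location -- adjust to wherever your OAuth credentials live.
os.environ["GOOGLE_ACCOUNT_FILE"] = "/path/to/credentials.json"
```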
The location of token.json uses the same directory (or use the token_path parameter). Note that token.json will be created automatically the first time you use the retriever.

GoogleDriveRetriever can retrieve a selection of files with some requests. By default, if you use a folder_id, all the files inside this folder can be retrieved as Document objects.

You can obtain your folder and document id from the URL:

- Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"
- Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"

The special value root is for your personal home.

```python
from langchain_googledrive.retrievers import GoogleDriveRetriever

folder_id = "root"
# folder_id = '1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5'

retriever = GoogleDriveRetriever(
    num_results=2,
)
```

By default, all files with these mime-types can be converted to Document:

- text/text
- text/plain
- text/html
- text/csv
- text/markdown
- image/png
- image/jpeg
- application/epub+zip
- application/pdf
- application/rtf
- application/vnd.google-apps.document (GDoc)
- application/vnd.google-apps.presentation (GSlide)
- application/vnd.google-apps.spreadsheet (GSheet)
- application/vnd.google.colaboratory (Colab notebook)
- application/vnd.openxmlformats-officedocument.presentationml.presentation (PPTX)
- application/vnd.openxmlformats-officedocument.wordprocessingml.document (DOCX)

It's possible to update or customize this; see the GoogleDriveRetriever documentation. The corresponding packages must be installed.

```python
#!pip install unstructured
retriever.get_relevant_documents("machine learning")
```

You can customize the criteria used to select the files. A set of predefined filters is provided:

| template                               | description                                                     |
| -------------------------------------- | --------------------------------------------------------------- |
| gdrive-all-in-folder                   | Return all compatible files from a folder_id                    |
| gdrive-query                           | Search query in all drives                                      |
| gdrive-by-name                         | Search file with name query                                     |
| gdrive-query-in-folder                 | Search query in folder_id (and sub-folders if _recursive=true)  |
| gdrive-mime-type                       | Search a specific mime_type                                     |
| gdrive-mime-type-in-folder             | Search a specific mime_type in folder_id                        |
| gdrive-query-with-mime-type            | Search query with a specific mime_type                          |
| gdrive-query-with-mime-type-and-folder | Search query with a specific mime_type and in folder_id         |

```python
retriever = GoogleDriveRetriever(
    template="gdrive-query",  # Search everywhere
    num_results=2,  # But take only 2 documents
)
for doc in retriever.get_relevant_documents("machine learning"):
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

Alternatively, you can customize the prompt with a specialized PromptTemplate:

```python
from langchain.prompts import PromptTemplate

retriever = GoogleDriveRetriever(
    template=PromptTemplate(
        input_variables=["query"],
        # See https://developers.google.com/drive/api/guides/search-files
        template="(fullText contains '{query}') "
        "and mimeType='application/vnd.google-apps.document' "
        "and modifiedTime > '2000-01-01T00:00:00' "
        "and trashed=false",
    ),
    num_results=2,
    # See https://developers.google.com/drive/api/v3/reference/files/list
    includeItemsFromAllDrives=False,
    supportsAllDrives=False,
)
for doc in retriever.get_relevant_documents("machine learning"):
    print(f"{doc.metadata['name']}:")
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

Use Google Drive 'description' metadata

Each Google Drive file has a description field in its metadata (see the details of a file). Use the snippets mode to return the description of selected files.

```python
retriever = GoogleDriveRetriever(
    template="gdrive-mime-type-in-folder",
    folder_id=folder_id,
    mime_type="application/vnd.google-apps.document",  # Only Google Docs
    num_results=2,
    mode="snippets",
    includeItemsFromAllDrives=False,
    supportsAllDrives=False,
)
retriever.get_relevant_documents("machine learning")
```
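As a sketch beyond the original notebook, the Drive retriever can be plugged into a stock QA chain so answers are grounded in your own documents. This assumes OPENAI_API_KEY is set and reuses the retriever built above:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Answer questions over the documents the Drive retriever returns.
llm = ChatOpenAI(temperature=0)
qa = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
qa.run("What do my documents say about machine learning?")
```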
Wikipedia | 🦜️🔗 Langchain
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.

This notebook shows how to retrieve wiki pages from wikipedia.org into the Document format that is used downstream.

Installation

First, you need to install the wikipedia Python package.

```python
#!pip install wikipedia
```

WikipediaRetriever has these arguments:

- optional lang: default="en". Use it to search in a specific language section of Wikipedia.
- optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.
- optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), title, Summary. If True, other fields are also downloaded.

get_relevant_documents() has one argument, query: free text used to find documents in Wikipedia.

Examples

Running retriever

```python
from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever()
```
```python
docs = retriever.get_relevant_documents(query="HUNTER X HUNTER")
docs[0].metadata  # meta-information of the Document
```

    {'title': 'Hunter × Hunter',
     'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\'s Toonami programming block from April 2016 to June 2019.\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\n\n'}
```python
docs[0].page_content[:400]  # the content of the Document
```

    'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The sto'

Question Answering on facts

```python
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass

OPENAI_API_KEY = getpass()
```

    ········

```python
import os

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

model = ChatOpenAI(model_name="gpt-3.5-turbo")  # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
```

```python
questions = [
    "What is Apify?",
    "When the Monument to the Martyrs of the 1830 Revolution was created?",
    "What is the Abhayagiri Vihāra?",
    # "How big is Wikipédia en français?",
]
chat_history = []

for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
```
    -> **Question**: What is Apify? 
    
    **Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services. 
    
    -> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created? 
    
    **Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient. 
    
    -> **Question**: What is the Abhayagiri Vihāra? 
    
    **Answer**: Abhayagiri Vihāra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka. 
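The optional arguments described in the Installation section can be combined. A minimal sketch, assuming the argument names above (lang, load_max_docs, load_all_available_meta) map directly to constructor keywords:

```python
# A sketch: search the German-language Wikipedia, cap the number of
# downloaded pages, and request the full set of metadata fields.
retriever_de = WikipediaRetriever(
    lang="de",
    load_max_docs=2,
    load_all_available_meta=True,
)
docs_de = retriever_de.get_relevant_documents(query="Maschinelles Lernen")
docs_de[0].metadata.keys()  # now includes the extra metadata fields
```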
Weaviate Hybrid Search | 🦜️🔗 Langchain
Weaviate is an open-source vector database.

Hybrid search is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It combines the best features of keyword-based search algorithms with vector search techniques.

Hybrid search in Weaviate uses sparse and dense vectors to represent the meaning and context of search queries and documents.

This notebook shows how to use Weaviate hybrid search as a LangChain retriever.

Set up the retriever:

```python
#!pip install weaviate-client
import weaviate
import os

WEAVIATE_URL = os.getenv("WEAVIATE_URL")
auth_client_secret = (weaviate.AuthApiKey(api_key=os.getenv("WEAVIATE_API_KEY")),)
client = weaviate.Client(
    url=WEAVIATE_URL,
    additional_headers={
        "X-Openai-Api-Key": os.getenv("OPENAI_API_KEY"),
    },
)

# client.schema.delete_all()
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document

retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",
    text_key="text",
    attributes=[],
    create_schema_if_missing=True,
)
```

Add some data:
"Embracing The Future: AI Unveiled", "author": "Dr. Rebecca Simmons", }, page_content="A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.", ), Document( metadata={ "title": "Symbiosis: Harmonizing Humans and AI", "author": "Prof. Jonathan K. Sterling", }, page_content="Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.", ), Document( metadata={"title": "AI: The Ethical Quandary", "author": "Dr. Rebecca Simmons"}, page_content="In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.", ), Document( metadata={ "title": "Conscious Constructs: The Search for AI Sentience", "author": "Dr. Samuel Cortez", }, page_content="Dr. Cortez takes readers on a journey exploring the controversial topic of AI consciousness. The book provides compelling arguments for and against the possibility of true AI sentience.", ), Document( metadata={ "title": "Invisible Routines: Hidden AI in Everyday Life", "author": "Prof. Jonathan K. Sterling", }, page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", ),]retriever.add_documents(docs) ['3a27b0a5-8dbb-4fee-9eba-8b6bc2c252be', 'eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907',
Do a hybrid search:

```python
retriever.get_relevant_documents("the ethical implications of AI")
```

    [Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={}),
     Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={}),
     Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={}),
     Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={})]

Do a hybrid search with a where filter:

```python
retriever.get_relevant_documents(
    "AI integration in society",
    where_filter={
        "path": ["author"],
        "operator": "Equal",
        "valueString": "Prof. Jonathan K. Sterling",
    },
)
```

    [Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={}),
     Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={})]
Do a hybrid search with scores:

```python
retriever.get_relevant_documents(
    "AI integration in society",
    score=True,
)
```

    [Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907 contributed 0.00819672131147541 to the score\n(hybrid) Document eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907 contributed 0.00819672131147541 to the score', 'score': '0.016393442'}}),
     Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document b83765f2-e5d2-471f-8c02-c3350ade4c4f contributed 0.0078125 to the score\n(hybrid) Document b83765f2-e5d2-471f-8c02-c3350ade4c4f contributed 0.008064516129032258 to the score', 'score': '0.015877016'}}),
     Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document 7ebbdae7-1061-445f-a046-1989f2343d8f contributed 0.008064516129032258 to the score\n(hybrid) Document 7ebbdae7-1061-445f-a046-1989f2343d8f contributed 0.0078125 to the score', 'score': '0.015877016'}}),
     Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={'_additional': {'explainScore': '(vector) [-0.0071824766 -0.0006682752 0.001723625 -0.01897258 -0.0045127636 0.0024410256 -0.020503938 0.013768672 0.009520169 -0.037972264]... \n(hybrid) Document 3a27b0a5-8dbb-4fee-9eba-8b6bc2c252be contributed 0.007936507936507936 to the score', 'score': '0.007936508'}})]
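As a sketch beyond the original notebook, the retriever also exposes fields controlling how many results come back and how the keyword and vector signals are weighted. This assumes the `k` and `alpha` fields of WeaviateHybridSearchRetriever (alpha=1 is pure vector search, alpha=0 is pure keyword/BM25); verify against your installed version.

```python
# A sketch: return two documents, leaning toward the dense/vector signal.
balanced_retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",
    text_key="text",
    attributes=[],
    k=2,         # number of documents to return
    alpha=0.75,  # > 0.5 favors the vector score over BM25
)
balanced_retriever.get_relevant_documents("AI sentience")
```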
Kay.ai | 🦜️🔗 Langchain
Data API built for RAG 🕵️ We are curating the world's largest datasets as high-quality embeddings so your AI agents can retrieve context on the fly. Latest models, fast retrieval, and zero infra.

This notebook shows you how to retrieve datasets supported by Kay. You can currently search SEC Filings and Press Releases of US companies. Visit kay.ai for the latest data drops. For any questions, join our discord or tweet at us.

Installation

First you will need to install the kay package. You will also need an API key: you can get one for free at https://kay.ai. Once you have an API key, you must set it as the environment variable KAY_API_KEY.

KayAiRetriever has a static .create() factory method that takes the following arguments:

- dataset_id: string, required -- A Kay dataset id. This is a collection of data about a particular entity such as companies, people, or places. For example, try "company".
- data_types: List[string], optional -- A category within a dataset based on its origin or format, such as ‘SEC Filings’, ‘Press Releases’, or ‘Reports’ within the “company” dataset. For example, try ["10-K", "10-Q", "PressRelease"] under the “company” dataset. If left empty, Kay will retrieve the most relevant context across all types.
- num_contexts: int, optional, defaults to 6 -- The number of document chunks to retrieve on each call to get_relevant_documents().
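To illustrate the "left empty" behavior described above, a hedged sketch (whether the SDK expects an empty list or an omitted argument may vary by kay version, so treat this as an assumption to verify):

from langchain.retrievers import KayAiRetriever

retriever_all_types = KayAiRetriever.create(
    dataset_id="company",
    data_types=[],  # empty: per the docs above, Kay searches across all data types
    num_contexts=6,
)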
2,059
Kay will retrieve the most relevant context across all types.
num_contexts: int, optional, defaults to 6 -- The number of document chunks to retrieve on each call to get_relevant_documents().

Examples

Basic Retriever Usage

# Setup API key
from getpass import getpass
KAY_API_KEY = getpass()

········

import os
from langchain.retrievers import KayAiRetriever

os.environ["KAY_API_KEY"] = KAY_API_KEY
retriever = KayAiRetriever.create(
    dataset_id="company", data_types=["10-K", "10-Q", "PressRelease"], num_contexts=3
)
docs = retriever.get_relevant_documents(
    "What were the biggest strategy changes and partnerships made by Roku in 2023?"
)
docs

[Document(page_content='Company Name: ROKU INC\nCompany Industry: CABLE & OTHER PAY TELEVISION SERVICES\nArticle Title: Roku and FreeWheel Announce Strategic Partnership to Bring Roku’s Leading Ad Tech to FreeWheel Customers\nText: Additionally, eMarketer Link: https://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Fwww.insiderintelligence.com%2Finsights%2Favod-more-than-50-percent-of-us-digital-video-viewers%2F&esheet=53451144&newsitemid=20230712907788&lan=en-US&anchor=eMarketer&index=4&md5=b64dea72bcf6b6379474462602781d83 projects 57% of U.S. digital video users will stream an advertising-based video on demand (AVOD) service this year.\nHaving solutions aimed at driving greater interoperability and automation will help accelerate this growth.\nKey highlights of this collaboration include:\nStreamlined Integration: Roku has now integrated its demand application programming interface (dAPI) with FreeWheel s TV platform. Roku s demand API gives publishers direct, automatic and real-time access to more advertiser demand. This enhanced integration allows for streamlined ad operation workflows and better inventory quality control, both of which will improve publisher yield and revenue.\nSeamless Data Targeting: Publishers can now use Roku platform signals to enable advertisers to
2,060
Roku platform signals to enable advertisers to target audiences and measure campaign performance without relying on cookies. Additionally, FreeWheel and Roku will rely on data clean room technology to enable the activation of additional data sets providing better measurement and monetization to publishers and agencies.', metadata={'_additional': {'id': '962b79e0-f9d1-43ae-9f7a-8a9b42bc7a9a'}, 'chunk_type': 'text', 'chunk_years_mentioned': [], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': 'PressRelease', 'data_source_link': 'https://www.nasdaq.com/press-release/roku-and-freewheel-announce-strategic-partnership-to-bring-rokus-leading-ad-tech-to', 'data_source_publish_date': '2023-07-12T00:00:00Z', 'data_source_uid': 'a46f309c-705d-3946-96db-87aa4e73261f', 'title': 'ROKU INC | Roku and FreeWheel Announce Strategic Partnership to Bring Roku’s Leading Ad Tech to FreeWheel Customers'}), Document(page_content='Company Name: ROKU INC \n Company Industry: CABLE & OTHER PAY TELEVISION SERVICES \n Form Title: 10-K 2022-FY \n Form Section: Risk Factors \n Text: nd the Note Regarding Forward Looking Statements.This section of this Annual Report generally discusses fiscal years 2022 and 2021 and year to year comparisons between those years.Discussions of fiscal year 2020 and year to year comparisons between fiscal years 2021 and 2020 that are not included in this Annual Report can be found in Management\'s Discussion and Analysis of Financial Condition and Results of Operations in Part II, Item 7 of our Annual Report for the fiscal year ended December 31, 2021 filed with the SEC on February 18, 2022.Overview Effective as of the fourth quarter of fiscal 2022, we reorganized our reportable segments to better align with management\'s reporting of information reviewed by the Chief Operating Decision Maker ("CODM") for each segment.We renamed our "player" segment to "devices" which now includes our licensing
2,061
to "devices" which now includes our licensing arrangements with service operators and licensed Roku TV partners in addition to sales of our streaming players, audio products, smart home products and Roku branded TVs that will be designed, made, and sold by us in 2023.Our historical segment information is recast to conform to our new presentation in our financial statements and accompanying notes included in Item 8 of this Annual Report.Our two reportable segments are the platform segment and the devices segment.', metadata={'_additional': {'id': 'a76c5fed-5d63-45a7-b63a-2c30e05140fc'}, 'chunk_type': 'text', 'chunk_years_mentioned': [2020, 2021, 2022, 2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': '10-K', 'data_source_link': 'https://www.sec.gov/Archives/edgar/data/1428439/000142843923000007', 'data_source_publish_date': '2022-01-01T00:00:00Z', 'data_source_uid': '0001428439-23-000007', 'title': 'ROKU INC | 10-K 2022-FY '}), Document(page_content='Company Name: ROKU INC \n Company Industry: CABLE & OTHER PAY TELEVISION SERVICES \n Form Title: 10-Q 2023-Q1 \n Form Section: Risk Factors \n Text: Our current and potential partners include TV brands, cable and satellite companies, and telecommunication providers.Under these license arrangements, we generally have limited or no control over the amount and timing of resources these entities dedicate to the relationship.In the past, our licensed Roku TV partners have failed to meet their forecasts and anticipated market launch dates for distributing Roku TV models, and they may fail to meet their forecasts or such launches in the future.If our licensed Roku TV partners or service operator partners fail to meet their forecasts or such launches for distributing licensed streaming devices or choose to deploy competing streaming solutions within their product lines, our business may be harmed.We depend on a small number of content publishers for a
2,062
on a small number of content publishers for a majority of our streaming hours, and if we fail to maintain these relationships, our business could be harmed.*Historically, a small number of content publishers have accounted for a significant portion of the hours streamed on our platform.In the three months ended March 31, 2023, the top three streaming services represented over 50% of all hours streamed in the period.If, for any reason, we cease distributing channels that have historically streamed a large percentage of the aggregate streaming hours on our platform, our streaming hours, our active accounts, or Roku streaming device sales may be adversely affected, and our business may be harmed.', metadata={'_additional': {'id': '2a92b2bb-02a0-4e15-8b64-d7e04078a205'}, 'chunk_type': 'text', 'chunk_years_mentioned': [2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': '10-Q', 'data_source_link': 'https://www.sec.gov/Archives/edgar/data/1428439/000142843923000017', 'data_source_publish_date': '2023-01-01T00:00:00Z', 'data_source_uid': '0001428439-23-000017', 'title': 'ROKU INC | 10-Q 2023-Q1 '})]

Usage in a chain

OPENAI_API_KEY = getpass()

········

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

model = ChatOpenAI(model_name="gpt-3.5-turbo")
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

questions = [
    "What were the biggest strategy changes and partnerships made by Roku in 2023?"
    # "Where is Wex making the most money in 2023?",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")

-> **Question**: What were the biggest strategy changes and partnerships made by
2,063
biggest strategy changes and partnerships made by Roku in 2023?

**Answer**: In 2023, Roku made a strategic partnership with FreeWheel to bring Roku's leading ad tech to FreeWheel customers. This partnership aimed to drive greater interoperability and automation in the advertising-based video on demand (AVOD) space. Key highlights of this collaboration include streamlined integration of Roku's demand application programming interface (dAPI) with FreeWheel's TV platform, allowing for better inventory quality control and improved publisher yield and revenue. Additionally, publishers can now use Roku platform signals to enable advertisers to target audiences and measure campaign performance without relying on cookies. This partnership also involves the use of data clean room technology to enable the activation of additional data sets for better measurement and monetization for publishers and agencies. These partnerships and strategies aim to support Roku's growth in the AVOD market.
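For single-shot questions without chat history, the same retriever also drops into a plain RetrievalQA chain. A sketch reusing the model and retriever defined above (the question string is illustrative):

from langchain.chains import RetrievalQA

qa_single = RetrievalQA.from_chain_type(llm=model, retriever=retriever)
qa_single.run("Summarize Roku's partnership with FreeWheel.")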
2,064
Tavily Search API | 🦜️🔗 Langchain
Tavily's Search API is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.
2,065
Tavily Search API

Tavily's Search API is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.

Usage

For a full list of allowed arguments, see the official documentation. You can also pass any param to the SDK via a kwargs dictionary.

# %pip install tavily-python
import os
from langchain.retrievers.tavily_search_api import TavilySearchAPIRetriever

os.environ["TAVILY_API_KEY"] = "YOUR_API_KEY"

retriever = TavilySearchAPIRetriever(k=4)
retriever.invoke("what year was breath of the wild released?")

[Document(page_content='Nintendo Designer (s) Hidemaro Fujibayashi (director) Eiji Aonuma (producer/group manager) Release date (s) United States of America: • March 3, 2017 Japan: • March 3, 2017 Australia / New Zealand: • March 2, 2017 Belgium: • March 3, 2017 Hong Kong: • Feburary 1, 2018 South Korea: • February 1, 2018 The UK / Ireland: • March 3, 2017 Content ratings', metadata={'title': 'The Legend of Zelda: Breath of the Wild - Zelda Wiki', 'source': 'https://zelda.fandom.com/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.96994, 'images': None}), Document(page_content='02/01/23 Nintendo Switch Online member exclusive: Save on two digital
2,066
Online member exclusive: Save on two digital games Read more 09/13/22 Out of the Shadows … the Legend of Zelda: Tears of the Kingdom Launches for Nintendo Switch on May...', metadata={'title': 'The Legend of Zelda™: Breath of the Wild - Nintendo', 'source': 'https://www.nintendo.com/store/products/the-legend-of-zelda-breath-of-the-wild-switch/', 'score': 0.94346, 'images': None}), Document(page_content='Now we finally have a concrete release date of May 12, 2023. The date was announced alongside this brief (and mysterious) new trailer that also confirmed its title: The Legend of Zelda: Tears...', metadata={'title': 'The Legend of Zelda: Tears of the Kingdom: Release Date, Gameplay ... - IGN', 'source': 'https://www.ign.com/articles/the-legend-of-zelda-breath-of-the-wild-2-release-date-gameplay-news-rumors', 'score': 0.94145, 'images': None}), Document(page_content='It was eventually released on March 3, 2017, as a launch game for the Switch and the final Nintendo game for the Wii U. It received widespread acclaim and won numerous Game of the Year accolades. Critics praised its open-ended gameplay, open-world design, and attention to detail, though some criticized its technical performance.', metadata={'title': 'The Legend of Zelda: Breath of the Wild - Wikipedia', 'source': 'https://en.wikipedia.org/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.92102, 'images': None})]
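As noted above, extra parameters can be forwarded to the Tavily SDK via a kwargs dictionary. A hedged sketch; the parameter names search_depth and include_domains follow Tavily's API documentation and should be verified against the current SDK:

from langchain.retrievers.tavily_search_api import TavilySearchAPIRetriever

retriever = TavilySearchAPIRetriever(
    k=4,
    # assumed Tavily SDK parameters, passed through to the API untouched
    kwargs={"search_depth": "advanced", "include_domains": ["wikipedia.org"]},
)
retriever.invoke("what year was breath of the wild released?")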
2,067
Arxiv | 🦜️🔗 Langchain
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
2,068
Arxiv

arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.

This notebook shows how to retrieve scientific articles from Arxiv.org into the Document format that is used downstream.

Installation

First, you need to install the arxiv python package.

#!pip install arxiv

ArxivRetriever has these arguments:

optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is currently a hard limit of 300.
optional load_all_available_meta: default=False. By default, only the most important fields are downloaded: Published (the date the document was published or last updated), Title, Authors, Summary. If True, the other fields are also downloaded.

get_relevant_documents() has one argument, query: free text used to find documents on Arxiv.org.

Examples

Running retriever

from langchain.retrievers import ArxivRetriever

retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents(query="1605.08386")
docs[0].metadata  # meta-information of the Document

{'Published':
2,069
meta-information of the Document

{'Published': '2016-05-26', 'Title': 'Heat-bath random walks with Markov bases', 'Authors': 'Caprice Stanley, Tobias Windisch', 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}

docs[0].page_content[:400]  # the content of the Document

'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'

Question Answering on facts

# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()

········

import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

model = ChatOpenAI(model_name="gpt-3.5-turbo")  # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

questions = [
    "What are Heat-bath random walks with Markov base?",
    "What is the ImageBind model?",
    "How does Compositional Reasoning with Large Language Models works?",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
2,070
print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")

-> **Question**: What are Heat-bath random walks with Markov base?

**Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term "Heat-bath random walks with Markov base" is not mentioned in the given text. Could you provide more information or context about where you encountered this term?

-> **Question**: What is the ImageBind model?

**Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks.

-> **Question**: How does Compositional Reasoning with Large Language Models works?

**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones. In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode
2,071
vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed. The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts.

questions = [
    "What are Heat-bath random walks with Markov base? Include references to answer.",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")

-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer.

**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.

The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.

References: Bortz, A. B., Kalos, M. H., &
2,072
References: Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.

Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media.
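As the argument list above notes, setting load_all_available_meta=True downloads the full set of arXiv metadata instead of only Published, Title, Authors, and Summary. A small sketch reusing the same query:

from langchain.retrievers import ArxivRetriever

retriever_full_meta = ArxivRetriever(load_max_docs=2, load_all_available_meta=True)
docs = retriever_full_meta.get_relevant_documents(query="1605.08386")
sorted(docs[0].metadata.keys())  # now includes fields beyond the four defaults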
2,073
Arcee Retriever | 🦜️🔗 Langchain
This notebook demonstrates how to use the ArceeRetriever class to retrieve relevant document(s) for Arcee's Domain Adapted Language Models (DALMs).
2,074
Arcee Retriever

This notebook demonstrates how to use the ArceeRetriever class to retrieve relevant document(s) for Arcee's Domain Adapted Language Models (DALMs).

Setup

Before using ArceeRetriever, make sure the Arcee API key is set as the ARCEE_API_KEY environment variable. You can also pass the API key as a named parameter.

from langchain.retrievers import ArceeRetriever

retriever = ArceeRetriever(
    model="DALM-PubMed",
    # arcee_api_key="ARCEE-API-KEY"  # if not already set in the environment
)

Additional Configuration

You can also configure ArceeRetriever's parameters such as arcee_api_url, arcee_app_url, and model_kwargs as needed.
2,075
Setting model_kwargs at object initialization uses the filters and size as defaults for all subsequent retrievals.

retriever = ArceeRetriever(
    model="DALM-PubMed",
    # arcee_api_key="ARCEE-API-KEY",  # if not already set in the environment
    arcee_api_url="https://custom-api.arcee.ai",  # default is https://api.arcee.ai
    arcee_app_url="https://custom-app.arcee.ai",  # default is https://app.arcee.ai
    model_kwargs={
        "size": 5,
        "filters": [
            {
                "field_name": "document",
                "filter_type": "fuzzy_search",
                "value": "Einstein",
            }
        ],
    },
)

Retrieving documents

You can retrieve relevant documents from uploaded contexts by providing a query. Here's an example:

query = "Can AI-driven music therapy contribute to the rehabilitation of patients with disorders of consciousness?"
documents = retriever.get_relevant_documents(query=query)

Additional parameters

Arcee allows you to apply filters and set the size (in terms of count) of retrieved document(s). Filters help narrow down the results. Here's how to use these parameters:

# Define filters
filters = [
    {
        "field_name": "document",
        "filter_type": "fuzzy_search",
        "value": "Music",
    },
    {
        "field_name": "year",
        "filter_type": "strict_search",
        "value": "1905",
    },
]

# Retrieve documents with filters and size params
documents = retriever.get_relevant_documents(query=query, size=5, filters=filters)
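Like the other retrievers on this page, ArceeRetriever can be plugged into a chain. A sketch that mirrors the chain pattern used in the Kay.ai and Arxiv examples above rather than anything Arcee-specific (assumes an OpenAI key is already set):

from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

model = ChatOpenAI(model_name="gpt-3.5-turbo")
qa = RetrievalQA.from_chain_type(llm=model, retriever=retriever)
qa.run("Can AI-driven music therapy help patients with disorders of consciousness?")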
2,076
Zep | 🦜️🔗 Langchain
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,077
Zep

Retriever Example for Zep - Fast, scalable building blocks for LLM Apps

More on Zep:

Zep is an open source platform for productionizing LLM apps. Go from a prototype built in LangChain or LlamaIndex, or a custom app, to production in minutes without rewriting code.

Key Features:
Fast! Zep's async extractors operate independently of your chat loop, ensuring a snappy user experience.
Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
Hybrid search over memories and metadata, with messages automatically embedded on creation.
Entity Extractor that automatically extracts named entities from messages and stores them in the message metadata.
Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
Python and JavaScript SDKs.

Zep project: https://github.com/getzep/zep
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,078
Docs: https://docs.getzep.com/
Retriever Example​
This notebook demonstrates how to search historical chat message histories using the Zep Long-term Memory Store.
We'll demonstrate:
- Adding conversation history to the Zep memory store.
- Vector search over the conversation history.

import getpass
import time
from uuid import uuid4

from langchain.memory import ZepMemory
from langchain.schema import HumanMessage, AIMessage

# Set this to your Zep server URL
ZEP_API_URL = "http://localhost:8000"

Initialize the Zep Chat Message History Class and add a chat message history to the memory store​
NOTE: Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A session_id is required when instantiating the Retriever.

# Provide your Zep API key. Note that this is optional. See https://docs.getzep.com/deployment/auth
AUTHENTICATE = False
zep_api_key = None
if AUTHENTICATE:
    zep_api_key = getpass.getpass()

session_id = str(uuid4())  # This is a unique identifier for the user/session

# Initialize the Zep Memory Class
zep_memory = ZepMemory(session_id=session_id, url=ZEP_API_URL, api_key=zep_api_key)

    /Users/danielchalef/dev/langchain/.venv/lib/python3.11/site-packages/zep_python/zep_client.py:86: Warning: You are using an incompatible Zep server version. Please upgrade to {MINIMUM_SERVER_VERSION} or later.
      self._healthcheck(base_url)

# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.
test_history = [
    {"role": "human", "content": "Who was Octavia Butler?"},
    {
        "role": "ai",
        "content": (
            "Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American"
            " science fiction author."
        ),
    },
    {"role": "human", "content": "Which books of hers were made into movies?"},
    {
        "role": "ai",
        "content": (
            "The most well-known adaptation of Octavia Butler's work is the FX series"
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,079
            " Kindred, based on her novel of the same name."
        ),
    },
    {"role": "human", "content": "Who were her contemporaries?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R."
            " Delany, and Joanna Russ."
        ),
    },
    {"role": "human", "content": "What awards did she win?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur"
            " Fellowship."
        ),
    },
    {
        "role": "human",
        "content": "Which other women sci-fi writers might I want to read?",
    },
    {
        "role": "ai",
        "content": "You might want to read Ursula K. Le Guin or Joanna Russ.",
    },
    {
        "role": "human",
        "content": (
            "Write a short synopsis of Butler's book, Parable of the Sower. What is it"
            " about?"
        ),
    },
    {
        "role": "ai",
        "content": (
            "Parable of the Sower is a science fiction novel by Octavia Butler,"
            " published in 1993. It follows the story of Lauren Olamina, a young woman"
            " living in a dystopian future where society has collapsed due to"
            " environmental disasters, poverty, and violence."
        ),
    },
]

for msg in test_history:
    zep_memory.chat_memory.add_message(
        HumanMessage(content=msg["content"])
        if msg["role"] == "human"
        else AIMessage(content=msg["content"])
    )

time.sleep(2)  # Wait for the messages to be embedded

Use the Zep Retriever to vector search over the Zep memory​
Zep provides native vector search over historical conversation memory. Embedding happens automatically.
NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated.
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,080
from langchain.retrievers import ZepRetriever
from langchain.retrievers.zep import SearchType

zep_retriever = ZepRetriever(
    session_id=session_id,  # Ensure that you provide the session_id when instantiating the Retriever
    url=ZEP_API_URL,
    top_k=5,
    api_key=zep_api_key,
)

await zep_retriever.aget_relevant_documents("Who wrote Parable of the Sower?")

    [Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897589445114136, 'uuid': 'f99ecec3-f778-4bfd-8bb7-c3c00ae919c0', 'created_at': '2023-10-17T22:53:08.664849Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}, {'Label': 'PERSON', 'Matches': [{'End': 65, 'Start': 51, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'DATE', 'Matches': [{'End': 84, 'Start': 80, 'Text': '1993'}], 'Name': '1993'}, {'Label': 'PERSON', 'Matches': [{'End': 124, 'Start': 110, 'Text': 'Lauren Olamina'}], 'Name': 'Lauren Olamina'}]}}, 'token_count': 56}),
     Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.8856973648071289, 'uuid': 'f6aba470-f15f-4b22-84ef-1c0d315a31de', 'created_at': '2023-10-17T22:53:08.642659Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 32, 'Start': 26, 'Text': 'Butler'}], 'Name': 'Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 61, 'Start': 41, 'Text': 'Parable of the Sower'}], 'Name': 'Parable of the Sower'}]}}, 'token_count': 23}),
     Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759557962417603, 'uuid':
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,081
     '26aab7b5-34b1-4aff-9be0-7834a7702be4', 'created_at': '2023-10-17T22:53:08.585297Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 8, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}], 'intent': 'The subject is asking for information about Octavia Butler, a specific person.'}}, 'token_count': 8}),
     Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.760245680809021, 'uuid': 'ee4aa8e9-9913-4e69-a2a5-77a85294d24e', 'created_at': '2023-10-17T22:53:08.611466Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 16, 'Start': 0, 'Text': "Octavia Butler's"}], 'Name': "Octavia Butler's"}, {'Label': 'ORG', 'Matches': [{'End': 58, 'Start': 41, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 76, 'Start': 60, 'Text': 'Samuel R. Delany'}], 'Name': 'Samuel R. Delany'}, {'Label': 'PERSON', 'Matches': [{'End': 93, 'Start': 82, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 27}),
     Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7596070170402527, 'uuid': '9fa630e6-0b17-4d77-80b0-ba99249850c0', 'created_at': '2023-10-17T22:53:08.630731Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 18})]

We can also use the Zep sync API to retrieve results:

zep_retriever.get_relevant_documents("Who wrote Parable of the Sower?")
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,082
    [Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897120952606201, 'uuid': 'f99ecec3-f778-4bfd-8bb7-c3c00ae919c0', 'created_at': '2023-10-17T22:53:08.664849Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}, {'Label': 'PERSON', 'Matches': [{'End': 65, 'Start': 51, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'DATE', 'Matches': [{'End': 84, 'Start': 80, 'Text': '1993'}], 'Name': '1993'}, {'Label': 'PERSON', 'Matches': [{'End': 124, 'Start': 110, 'Text': 'Lauren Olamina'}], 'Name': 'Lauren Olamina'}]}}, 'token_count': 56}),
     Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.8857351541519165, 'uuid': 'f6aba470-f15f-4b22-84ef-1c0d315a31de', 'created_at': '2023-10-17T22:53:08.642659Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 32, 'Start': 26, 'Text': 'Butler'}], 'Name': 'Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 61, 'Start': 41, 'Text': 'Parable of the Sower'}], 'Name': 'Parable of the Sower'}]}}, 'token_count': 23}),
     Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759560942649841, 'uuid': '26aab7b5-34b1-4aff-9be0-7834a7702be4', 'created_at': '2023-10-17T22:53:08.585297Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 8, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}], 'intent': 'The subject is asking for information about Octavia Butler, a specific person.'}},
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,083
     'token_count': 8}),
     Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602507472038269, 'uuid': 'ee4aa8e9-9913-4e69-a2a5-77a85294d24e', 'created_at': '2023-10-17T22:53:08.611466Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 16, 'Start': 0, 'Text': "Octavia Butler's"}], 'Name': "Octavia Butler's"}, {'Label': 'ORG', 'Matches': [{'End': 58, 'Start': 41, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 76, 'Start': 60, 'Text': 'Samuel R. Delany'}], 'Name': 'Samuel R. Delany'}, {'Label': 'PERSON', 'Matches': [{'End': 93, 'Start': 82, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}], 'intent': "The subject is stating a fact about Octavia Butler's contemporaries, including Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ."}}, 'token_count': 27}),
     Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595934867858887, 'uuid': '9fa630e6-0b17-4d77-80b0-ba99249850c0', 'created_at': '2023-10-17T22:53:08.630731Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 18})]

Reranking using MMR (Maximal Marginal Relevance)​
Zep has native, SIMD-accelerated support for reranking results using MMR. This is useful for removing redundancy in results.

zep_retriever = ZepRetriever(
    session_id=session_id,  # Ensure that you provide the session_id when instantiating the Retriever
    url=ZEP_API_URL,
    top_k=5,
    api_key=zep_api_key,
    search_type=SearchType.mmr,
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,084
    mmr_lambda=0.5,
)

await zep_retriever.aget_relevant_documents("Who wrote Parable of the Sower?")

    /Users/danielchalef/dev/langchain/.venv/lib/python3.11/site-packages/zep_python/zep_client.py:86: Warning: You are using an incompatible Zep server version. Please upgrade to {MINIMUM_SERVER_VERSION} or later.
      self._healthcheck(base_url)

    [Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897120952606201, 'uuid': 'f99ecec3-f778-4bfd-8bb7-c3c00ae919c0', 'created_at': '2023-10-17T22:53:08.664849Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}, {'Label': 'PERSON', 'Matches': [{'End': 65, 'Start': 51, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'DATE', 'Matches': [{'End': 84, 'Start': 80, 'Text': '1993'}], 'Name': '1993'}, {'Label': 'PERSON', 'Matches': [{'End': 124, 'Start': 110, 'Text': 'Lauren Olamina'}], 'Name': 'Lauren Olamina'}]}}, 'token_count': 56}),
     Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496200799942017, 'uuid': '1047ff15-96f1-4101-bb0f-9ed073b8081d', 'created_at': '2023-10-17T22:53:08.596614Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'intent': 'The subject is inquiring about the books of the person referred to as "hers" that have been made into movies.'}}, 'token_count': 11}),
     Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.8857351541519165, 'uuid': 'f6aba470-f15f-4b22-84ef-1c0d315a31de', 'created_at': '2023-10-17T22:53:08.642659Z', 'updated_at':
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,085
     '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 32, 'Start': 26, 'Text': 'Butler'}], 'Name': 'Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 61, 'Start': 41, 'Text': 'Parable of the Sower'}], 'Name': 'Parable of the Sower'}]}}, 'token_count': 23}),
     Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595934867858887, 'uuid': '9fa630e6-0b17-4d77-80b0-ba99249850c0', 'created_at': '2023-10-17T22:53:08.630731Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 18}),
     Document(page_content='Who were her contemporaries?', metadata={'score': 0.7575579881668091, 'uuid': 'b2dfd1f7-cac6-4e37-94ea-7c15b0a5af2c', 'created_at': '2023-10-17T22:53:08.606283Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'intent': 'The subject is asking about the people who were contemporaries of someone else.'}}, 'token_count': 8})]

Using metadata filters to refine search results​
Zep supports filtering results by metadata. This is useful for filtering results by entity type, or other metadata.
More information here: https://docs.getzep.com/sdk/search_query/

filter = {"where": {"jsonpath": '$[*] ? (@.Label == "WORK_OF_ART")'}}

await zep_retriever.aget_relevant_documents(
    "Who wrote Parable of the Sower?", metadata=filter
)

    [Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.',
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,086
     metadata={'score': 0.8897120952606201, 'uuid': 'f99ecec3-f778-4bfd-8bb7-c3c00ae919c0', 'created_at': '2023-10-17T22:53:08.664849Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'GPE', 'Matches': [{'End': 20, 'Start': 15, 'Text': 'Sower'}], 'Name': 'Sower'}, {'Label': 'PERSON', 'Matches': [{'End': 65, 'Start': 51, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}, {'Label': 'DATE', 'Matches': [{'End': 84, 'Start': 80, 'Text': '1993'}], 'Name': '1993'}, {'Label': 'PERSON', 'Matches': [{'End': 124, 'Start': 110, 'Text': 'Lauren Olamina'}], 'Name': 'Lauren Olamina'}], 'intent': 'None'}}, 'token_count': 56}),
     Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496200799942017, 'uuid': '1047ff15-96f1-4101-bb0f-9ed073b8081d', 'created_at': '2023-10-17T22:53:08.596614Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'intent': 'The subject is inquiring about the books of the person referred to as "hers" that have been made into movies.'}}, 'token_count': 11}),
     Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.8857351541519165, 'uuid': 'f6aba470-f15f-4b22-84ef-1c0d315a31de', 'created_at': '2023-10-17T22:53:08.642659Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 32, 'Start': 26, 'Text': 'Butler'}], 'Name': 'Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 61, 'Start': 41, 'Text': 'Parable of the Sower'}], 'Name': 'Parable of the Sower'}], 'intent': 'The subject is requesting a brief summary or description of Butler\'s book, "Parable of the Sower."'}}, 'token_count': 23}),
     Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.',
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,087
     metadata={'score': 0.7595934867858887, 'uuid': '9fa630e6-0b17-4d77-80b0-ba99249850c0', 'created_at': '2023-10-17T22:53:08.630731Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}], 'intent': 'The subject is providing a suggestion or recommendation for the person to read Ursula K. Le Guin or Joanna Russ.'}}, 'token_count': 18}),
     Document(page_content='Who were her contemporaries?', metadata={'score': 0.7575579881668091, 'uuid': 'b2dfd1f7-cac6-4e37-94ea-7c15b0a5af2c', 'created_at': '2023-10-17T22:53:08.606283Z', 'updated_at': '0001-01-01T00:00:00Z', 'role': 'human', 'metadata': {'system': {'intent': 'The subject is asking about the people who were contemporaries of someone else.'}}, 'token_count': 8})]
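Not shown in the original notebook, but as a minimal sketch of how this retriever could plug into a chain (the RetrievalQA pattern mirrors the You.com example later in this document; the OpenAI LLM and an OPENAI_API_KEY in the environment are assumptions):

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.retrievers import ZepRetriever

# Assumed wiring: answer questions grounded in this session's chat history.
zep_retriever = ZepRetriever(
    session_id=session_id, url=ZEP_API_URL, top_k=5, api_key=zep_api_key
)
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(), chain_type="stuff", retriever=zep_retriever
)
qa.run("Which awards did Octavia Butler win?")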
Retriever Example for Zep - Fast, scalable building blocks for LLM Apps
2,088
Azure Cognitive Search | 🦜️🔗 Langchain
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
2,089
Azure Cognitive Search
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:
- A search engine for full text search over a search index containing user-owned content
- Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
- Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
- Programmability through REST APIs and client libraries in Azure SDKs
- Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
This notebook shows how to use Azure Cognitive Search (ACS) within LangChain.
Set up Azure Cognitive Search​
To set up ACS, please follow the instructions here.
Please note:
- the name of your ACS service,
- the name of your ACS index,
- your API key.
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
2,090
Your API key can be either an Admin or a Query key, but as we only read data, it is recommended to use a Query key.
Using the Azure Cognitive Search Retriever​

import os
from langchain.retrievers import AzureCognitiveSearchRetriever

Set Service Name, Index Name and API key as environment variables (alternatively, you can pass them as arguments to AzureCognitiveSearchRetriever).

os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<YOUR_ACS_SERVICE_NAME>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = "<YOUR_ACS_INDEX_NAME>"
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<YOUR_API_KEY>"

Create the Retriever:

retriever = AzureCognitiveSearchRetriever(content_key="content", top_k=10)

Now you can retrieve documents from Azure Cognitive Search:

retriever.get_relevant_documents("what is langchain")

You can change the number of results returned with the top_k parameter. The default value is None, which returns all results.
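The note above says the service name, index name, and API key can alternatively be passed as arguments. A minimal sketch of that variant, assuming the parameter names service_name, index_name, and api_key (the page does not spell them out):

from langchain.retrievers import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="<YOUR_ACS_SERVICE_NAME>",  # assumed parameter name
    index_name="<YOUR_ACS_INDEX_NAME>",  # assumed parameter name
    api_key="<YOUR_QUERY_KEY>",  # a Query key suffices for read-only retrieval
    content_key="content",
    top_k=10,
)
retriever.get_relevant_documents("what is langchain")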
Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
2,091
BM25 | 🦜️🔗 Langchain
BM25, also known as Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.
2,092
BM25
BM25, also known as Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.
This notebook goes over how to use a retriever that uses BM25 under the hood, via the rank_bm25 package.

# !pip install rank_bm25

from langchain.retrievers import BM25Retriever

    /workspaces/langchain/.venv/lib/python3.10/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.10) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.
      warnings.warn(

Create New Retriever with Texts​

retriever = BM25Retriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])

Create a New Retriever with Documents​
You can now create a new retriever with the documents you created.

from langchain.schema import Document

retriever = BM25Retriever.from_documents(
    [
        Document(page_content="foo"),
        Document(page_content="bar"),
        Document(page_content="world"),
        Document(page_content="hello"),
        Document(page_content="foo bar"),
    ]
)

Use Retriever​
We can now use the retriever!

result = retriever.get_relevant_documents("foo")
result
BM25, also known as Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.
2,093
    [Document(page_content='foo', metadata={}),
     Document(page_content='foo bar', metadata={}),
     Document(page_content='hello', metadata={}),
     Document(page_content='world', metadata={})]
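To see BM25 over a real corpus rather than toy strings, here is a hedged sketch; the loader, splitter, file path, and the k attribute are assumptions, not part of this notebook:

from langchain.document_loaders import TextLoader
from langchain.retrievers import BM25Retriever
from langchain.text_splitter import CharacterTextSplitter

# Hypothetical corpus file; split it into chunks and index the chunks with BM25.
docs = TextLoader("state_of_the_union.txt").load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)
retriever = BM25Retriever.from_documents(chunks)
retriever.k = 2  # assumed attribute controlling how many documents are returned
retriever.get_relevant_documents("Ketanji Brown Jackson")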
BM25, also known as Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.
2,094
you-retriever | 🦜️🔗 Langchain
you-retriever
Using the You.com Retriever​
The retriever from You.com is good for retrieving lots of text: it returns multiple of the best text snippets for each URL it finds to be relevant.
First, you just need to initialize the retriever:

from langchain.retrievers.you_retriever import YouRetriever
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

yr = YouRetriever()
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", retriever=yr)

query = "what starting ohio state quarterback most recently went their entire college career without beating Michigan?"
qa.run(query)
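The chain above hides the retrieval step. As a hedged sketch, the retriever can also be queried directly through the standard retriever interface (this assumes your You.com API access is already configured in your environment):

from langchain.retrievers.you_retriever import YouRetriever

yr = YouRetriever()  # assumes You.com API credentials are configured
docs = yr.get_relevant_documents("what is the capital of France?")
for doc in docs:
    print(doc.page_content[:200])  # each relevant URL can contribute multiple snippets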
Using the You.com Retriever
2,095
PubMed | 🦜️🔗 Langchain
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: PubMed | 🦜️🔗 Langchain
2,096
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.

This notebook goes over how to use PubMed as a retriever.

from langchain.retrievers import PubMedRetriever

retriever = PubMedRetriever()
retriever.get_relevant_documents("chatgpt")

[Document(page_content='', metadata={'uid': '37549050', 'Title': 'ChatGPT: "To Be or Not to Be" in Bikini Bottom.', 'Published': '--', 'Copyright Information': ''}), Document(page_content="BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills,
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.

This notebook goes over how to use PubMed as a retriever.

from langchain.retrievers import PubMedRetriever

retriever = PubMedRetriever()
retriever.get_relevant_documents("chatgpt")

[Document(page_content='', metadata={'uid': '37549050', 'Title': 'ChatGPT: "To Be or Not to Be" in Bikini Bottom.', 'Published': '--', 'Copyright Information': ''}), Document(page_content="BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills,
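Mirroring the You.com row above, the PubMed retriever can likewise be plugged into a RetrievalQA chain. A sketch using only calls that appear elsewhere in this document; the question string is illustrative and an OpenAI API key is assumed:

from langchain.retrievers import PubMedRetriever
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Feed PubMed abstracts to an LLM as context for question answering.
retriever = PubMedRetriever()
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", retriever=retriever)
qa.run("What has been reported about ChatGPT's performance on professional medical examinations?")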
2,097
teamwork, patient safety, prioritization skills, professionalism, and ethics.\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics.", metadata={'uid': '37548997', 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.', 'Published': '2023-08-07', 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'}), Document(page_content='', metadata={'uid': '37548971', 'Title': "Large Language Models Answer Medical Questions Accurately, but Can't Match Clinicians' Knowledge.", 'Published': '2023-08-07', 'Copyright
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: teamwork, patient safety, prioritization skills, professionalism, and ethics.\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics.", metadata={'uid': '37548997', 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.', 'Published': '2023-08-07', 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'}), Document(page_content='', metadata={'uid': '37548971', 'Title': "Large Language Models Answer Medical Questions Accurately, but Can't Match Clinicians' Knowledge.", 'Published': '2023-08-07', 'Copyright
2,098
'Published': '2023-08-07', 'Copyright Information': ''})]
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. ->: 'Published': '2023-08-07', 'Copyright Information': ''})]
2,099
SEC filing | 🦜️🔗 Langchain
An SEC filing is a financial statement or other formal document submitted to the U.S. Securities and Exchange Commission (SEC). Public companies, certain insiders, and broker-dealers are required to make regular SEC filings. Investors and financial professionals rely on these filings for information about companies they are evaluating for investment purposes.
An SEC filing is a financial statement or other formal document submitted to the U.S. Securities and Exchange Commission (SEC). Public companies, certain insiders, and broker-dealers are required to make regular SEC filings. Investors and financial professionals rely on these filings for information about companies they are evaluating for investment purposes. ->: SEC filing | 🦜️🔗 Langchain