# Momento Vector Index (MVI)

MVI: the most productive, easiest-to-use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling: MVI is a service that scales automatically to meet your needs.

This page covers installing prerequisites, entering API keys (Momento for indexing data, OpenAI for text embeddings), asking a question directly against the index, and using an LLM to generate fluent answers.

…chat message history. Check out the other Momento LangChain integrations to learn more. To learn more about the Momento Vector Index, visit the Momento documentation.
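As a hedged sketch of the last two steps (querying the index and having an LLM phrase the answer), assuming a `vector_store` that has already been populated through the Momento integration and an `OPENAI_API_KEY` in the environment; this is illustrative, not the page's original code:

```python
# Illustrative sketch only: answer a question over an MVI-backed store.
# Assumes `vector_store` is an existing, populated LangChain VectorStore
# (e.g. a MomentoVectorIndex) and that OPENAI_API_KEY is set.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vector_store.as_retriever())
print(qa.run("What is the Momento Vector Index?"))
```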
# MyScale
MyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse.

This notebook shows how to use functionality related to the MyScale vector database.

## Setting up environments

```bash
pip install clickhouse-connect
```

We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

```python
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
os.environ["OPENAI_API_BASE"] = getpass.getpass("OpenAI Base:")
os.environ["MYSCALE_HOST"] = getpass.getpass("MyScale Host:")
os.environ["MYSCALE_PORT"] = getpass.getpass("MyScale Port:")
os.environ["MYSCALE_USERNAME"] = getpass.getpass("MyScale Username:")
os.environ["MYSCALE_PASSWORD"] = getpass.getpass("MyScale Password:")
```

There are two ways to set up parameters for the MyScale index.

Environment variables: before you run the app, please set the environment variables with `export`:
```bash
export MYSCALE_HOST='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...
```

You can easily find your account, password, and other info on our SaaS. For details, please refer to this document. Every attribute under `MyScaleSettings` can be set with the prefix `MYSCALE_` and is case-insensitive.

Alternatively, create a `MyScaleSettings` object with parameters:

```python
from langchain.vectorstores import MyScale, MyScaleSettings

config = MyScaleSettings(host="<your-backend-url>", port=8443, ...)
index = MyScale(embedding_function, config)
index.add_documents(...)
```

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import MyScale
from langchain.document_loaders import TextLoader

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
for d in docs:
    d.metadata = {"some": "metadata"}

docsearch = MyScale.from_documents(docs, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
```

```
Inserting data...: 100%|██████████| 42/42 [00:15<00:00,  2.66it/s]
```

```python
print(docs[0].page_content)
```

```
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
```
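Since every attribute of `MyScaleSettings` can come from a `MYSCALE_`-prefixed environment variable, the settings object can plausibly be constructed with no arguments at all. A minimal sketch, assuming the variables exported earlier are still set and that `MyScaleSettings` reads them on instantiation:

```python
# Sketch: configuration picked up from MYSCALE_-prefixed environment variables
# (an assumption based on the note above; nothing is passed explicitly here).
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import MyScale, MyScaleSettings

index = MyScale(OpenAIEmbeddings(), MyScaleSettings())
```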
## Get connection info and data schema

```python
print(str(docsearch))
```

## Filtering

You have direct access to the MyScale SQL WHERE statement: you can write a WHERE clause following standard SQL.

NOTE: Please be aware of SQL injection; this interface must not be called directly by the end user.

If you have customized your `column_map` under your setting, you can search with a filter like this:

```python
from langchain.vectorstores import MyScale, MyScaleSettings
from langchain.document_loaders import TextLoader

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

for i, d in enumerate(docs):
    d.metadata = {"doc_id": i}

docsearch = MyScale.from_documents(docs, embeddings)
```

```
Inserting data...: 100%|██████████| 42/42 [00:15<00:00,  2.68it/s]
```

## Similarity search with score

The returned distance score is cosine distance; therefore, a lower score is better.

```python
meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores(
    "What did the president say about Ketanji Brown Jackson?",
    k=4,
    where_str=f"{meta}.doc_id<10",
)
for d, dist in output:
    print(dist, d.metadata, d.page_content[:20] + "...")
```

```
0.229655921459198 {'doc_id': 0} Madam Speaker, Madam...
0.24506962299346924 {'doc_id': 8} And so many families...
0.24786919355392456 {'doc_id': 1} Groups of citizens b...
0.24875116348266602 {'doc_id': 6} And I’m taking robus...
```
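Because `where_str` accepts standard SQL, conditions can be combined. A small sketch reusing the `docsearch` and metadata column from above; per the SQL-injection note, build the clause only from trusted values:

```python
# Sketch: a compound standard-SQL filter on the metadata column.
# Never interpolate raw end-user input into where_str (see the note above).
meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores(
    "What did the president say about Ketanji Brown Jackson?",
    k=4,
    where_str=f"{meta}.doc_id >= 5 AND {meta}.doc_id < 10",
)
for d, dist in output:
    print(dist, d.metadata)
```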
## Deleting your data

You can either drop the table with the `.drop()` method or partially delete your data with the `.delete()` method.
```python
# use a `where_str` directly to delete
docsearch.delete(where_str=f"{docsearch.metadata_column}.doc_id < 5")

meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores(
    "What did the president say about Ketanji Brown Jackson?",
    k=4,
    where_str=f"{meta}.doc_id<10",
)
for d, dist in output:
    print(dist, d.metadata, d.page_content[:20] + "...")
```

```
0.24506962299346924 {'doc_id': 8} And so many families...
0.24875116348266602 {'doc_id': 6} And I’m taking robus...
0.26027143001556396 {'doc_id': 7} We see the unity amo...
0.26390212774276733 {'doc_id': 9} And unlike the $2 Tr...
```

```python
docsearch.drop()
```
# NucliaDB
You can use a local NucliaDB instance or use Nuclia Cloud.

When using a local instance, you need a Nuclia Understanding API key so that your texts are properly vectorized and indexed. You can get a key by creating a free account at https://nuclia.cloud and then creating a NUA key.

```python
#!pip install langchain nuclia
```

## Usage with nuclia.cloud

```python
from langchain.vectorstores.nucliadb import NucliaDB

API_KEY = "YOUR_API_KEY"

ndb = NucliaDB(knowledge_box="YOUR_KB_ID", local=False, api_key=API_KEY)
```

## Usage with a local instance

Note: By default, `backend` is set to `http://localhost:8080`.

```python
from langchain.vectorstores.nucliadb import NucliaDB

ndb = NucliaDB(knowledge_box="YOUR_KB_ID", local=True, backend="http://my-local-server")
```

## Add and delete texts to your Knowledge Box

```python
ids = ndb.add_texts(["This is a new test", "This is a second test"])

ndb.delete(ids=ids)
```

## Search in your Knowledge Box

```python
results = ndb.similarity_search("Who was inspired by Ada Lovelace?")
print(results[0].page_content)
```
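Tying the calls above together, a minimal round trip; this sketch assumes the `ndb` instance from either setup and the standard LangChain VectorStore semantics of these methods:

```python
# Sketch: index a text, search it back, then clean up.
# Assumes `ndb` from either of the setups above.
ids = ndb.add_texts(["Ada Lovelace wrote the first published algorithm."])
for doc in ndb.similarity_search("Who wrote the first algorithm?"):
    print(doc.page_content)
ndb.delete(ids=ids)
```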
# Chroma
Chroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.

Install Chroma with:

```bash
pip install chromadb
```

Chroma runs in various modes; see below for examples of each, integrated with LangChain:

- in-memory: in a Python script or Jupyter notebook
- in-memory with persistence: in a script or notebook, saving/loading to disk
- in a Docker container: as a server running on your local machine or in the cloud

Like any other database, you can:

- `.add`
- `.get`
- `.update`
- `.upsert`
- `.delete`
- `.peek`

and `.query` runs the similarity search.

View full docs at docs. To access these methods directly, you can use `._collection.method()` (see the sketch after the basic example below).
## Basic Example

In this basic example, we take the most recent State of the Union address, split it into chunks, embed it using an open-source embedding model, load it into Chroma, and then query it.

```python
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader

# load the document
loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()

# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)

# query it
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)

# print results
print(docs[0].page_content)
```

```
/Users/jeff/.pyenv/versions/3.10.10/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from .autonotebook import tqdm as notebook_tqdm

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
```
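The `._collection.method()` escape hatch mentioned earlier can be tried on the store we just built. A minimal sketch, assuming the `db` object from the basic example and the chromadb collection API:

```python
# Sketch: direct access to the underlying chromadb collection through the
# private `_collection` attribute (assumes `db` from the basic example).
print(db._collection.count())  # total number of stored embeddings
print(db._collection.peek())   # a small sample of raw records
```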
## Basic Example (including saving to disk)

Extending the previous example, if you want to save to disk, simply initialize the Chroma client and pass the directory where you want the data to be saved.

Caution: Chroma makes a best effort to automatically save data to disk; however, multiple in-memory clients can stomp on each other's work. As a best practice, only have one client per path running at any given time.

```python
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")
docs = db2.similarity_search(query)

# load from disk
db3 = Chroma(persist_directory="./chroma_db", embedding_function=embedding_function)
docs = db3.similarity_search(query)
print(docs[0].page_content)
```

```
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
```

## Passing a Chroma Client into Langchain

You can also create a Chroma Client and pass it to LangChain. This is particularly useful if you want easier access to the underlying database.

You can also specify the collection name that you want LangChain to use.

```python
import chromadb

persistent_client = chromadb.PersistentClient()
collection = persistent_client.get_or_create_collection("collection_name")
collection.add(ids=["1", "2", "3"], documents=["a", "b", "c"])

langchain_chroma = Chroma(
    client=persistent_client,
    collection_name="collection_name",
    embedding_function=embedding_function,
)

print("There are", langchain_chroma._collection.count(), "in the collection")
```

```
Add of existing embedding ID: 1
Add of existing embedding ID: 2
Add of existing embedding ID: 3
Add of existing embedding ID: 1
Add of existing embedding ID: 2
Add of existing embedding ID: 3
Add of existing embedding ID: 1
Insert of existing embedding ID: 1
Add of existing embedding ID: 2
Insert of existing embedding ID: 2
Add of existing embedding ID: 3
Insert of existing embedding ID: 3
There are 3 in the collection
```
## Basic Example (using the Docker Container)

You can also run the Chroma Server in a Docker container separately, create a Client to connect to it, and then pass that to LangChain.

Chroma has the ability to handle multiple Collections of documents, but the LangChain interface expects one, so we need to specify the collection name. The default collection name used by LangChain is "langchain".

Here is how to clone, build, and run the Docker Image:

```bash
git clone git@github.com:chroma-core/chroma.git
```

Edit the `docker-compose.yml` file and add `ALLOW_RESET=TRUE` under `environment`:

```yaml
    ...
    command: uvicorn chromadb.app:app --reload --workers 1 --host 0.0.0.0 --port 8000 --log-config log_config.yml
    environment:
      - IS_PERSISTENT=TRUE
      - ALLOW_RESET=TRUE
    ports:
      - 8000:8000
    ...
```

Then run `docker-compose up -d --build`.

```python
# create the chroma client
import chromadb
import uuid
from chromadb.config import Settings

client = chromadb.HttpClient(settings=Settings(allow_reset=True))
client.reset()  # resets the database
collection = client.create_collection("my_collection")
for doc in docs:
    collection.add(
        ids=[str(uuid.uuid1())], metadatas=doc.metadata, documents=doc.page_content
    )

# tell LangChain to use our client and collection name
db4 = Chroma(
    client=client,
    collection_name="my_collection",
    embedding_function=embedding_function,
)
query = "What did the president say about Ketanji Brown Jackson"
docs = db4.similarity_search(query)
print(docs[0].page_content)
```

```
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
```
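If the query fails to connect, a quick health check against the server can help. A sketch reusing the `client` created above; `heartbeat()` is a chromadb client call that returns a timestamp when the server responds:

```python
# Sketch: connectivity check against the Dockerized Chroma server,
# reusing the `client` created above.
print(client.heartbeat())
```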
1,014 | = "What did the president say about Ketanji Brown Jackson"docs = db4.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Update and Delete​While building toward a real application, you want to go beyond adding data, and also update and delete data. Chroma has users provide ids to simplify the bookkeeping here. ids can be the name of the file, or a combined has like filename_paragraphNumber, etc.Chroma supports all these operations - though some of them are still being integrated all the way through the LangChain interface. Additional workflow improvements will be added soon.Here is a basic example showing how to do various operations:# create simple idsids = [str(i) for i in range(1, len(docs) + 1)]# add dataexample_db = Chroma.from_documents(docs, embedding_function, ids=ids)docs = example_db.similarity_search(query)print(docs[0].metadata)# update the metadata for a documentdocs[0].metadata = { "source": "../../modules/state_of_the_union.txt", "new_value": "hello world",}example_db.update_document(ids[0], docs[0])print(example_db._collection.get(ids=[ids[0]]))# delete the last documentprint("count before", | Chroma is a AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0. | Chroma is a AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0. ->: = "What did the president say about Ketanji Brown Jackson"docs = db4.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Update and Delete​While building toward a real application, you want to go beyond adding data, and also update and delete data. Chroma has users provide ids to simplify the bookkeeping here. ids can be the name of the file, or a combined has like filename_paragraphNumber, etc.Chroma supports all these operations - though some of them are still being integrated all the way through the LangChain interface. 
Additional workflow improvements will be added soon.Here is a basic example showing how to do various operations:# create simple idsids = [str(i) for i in range(1, len(docs) + 1)]# add dataexample_db = Chroma.from_documents(docs, embedding_function, ids=ids)docs = example_db.similarity_search(query)print(docs[0].metadata)# update the metadata for a documentdocs[0].metadata = { "source": "../../modules/state_of_the_union.txt", "new_value": "hello world",}example_db.update_document(ids[0], docs[0])print(example_db._collection.get(ids=[ids[0]]))# delete the last documentprint("count before", |
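Of the operations listed earlier, `.upsert` is one that is reached through the underlying collection. A hedged sketch reusing `example_db`, `ids`, and `docs` from the example above; in chromadb, `upsert` adds a record or overwrites an existing one by id:

```python
# Sketch: add-or-overwrite by id via the underlying chromadb collection.
# Assumes `example_db`, `ids`, and `docs` from the example above.
example_db._collection.upsert(
    ids=[ids[0]],
    metadatas=[docs[0].metadata],
    documents=[docs[0].page_content],
)
```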
## Use OpenAI Embeddings

Many people like to use OpenAIEmbeddings; here is how to set that up.

```python
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
from langchain.embeddings.openai import OpenAIEmbeddings

OPENAI_API_KEY = getpass()

import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

embeddings = OpenAIEmbeddings()
new_client = chromadb.EphemeralClient()
openai_lc_client = Chroma.from_documents(
    docs, embeddings, client=new_client, collection_name="openai_collection"
)

query = "What did the president say about Ketanji Brown Jackson"
docs = openai_lc_client.similarity_search(query)
print(docs[0].page_content)
```

```
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
```
## Other Information

### Similarity search with score

The returned distance score is cosine distance; therefore, a lower score is better.

```python
docs = db.similarity_search_with_score(query)
docs[0]
```

```
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}),
 1.1972057819366455)
```

### Retriever options

This section goes over different options for how to use Chroma as a retriever.

#### MMR

In addition to using similarity search in the retriever object, you can also use `mmr`.

```python
retriever = db.as_retriever(search_type="mmr")
retriever.get_relevant_documents(query)[0]
```

```
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
```
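The retriever can also be tuned through `search_kwargs`. A sketch assuming the standard LangChain retriever parameters, where `k` results are selected from `fetch_k` MMR candidates:

```python
# Sketch: tune MMR retrieval; `k` is the number of results returned and
# `fetch_k` the number of candidates considered before re-ranking
# (assumed standard LangChain retriever parameters).
retriever = db.as_retriever(search_type="mmr", search_kwargs={"k": 2, "fetch_k": 10})
retriever.get_relevant_documents(query)
```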
### Filtering on metadata

It can be helpful to narrow down the collection before working with it. For example, collections can be filtered on metadata using the `get` method.

```python
# filter collection for updated source
example_db.get(where={"source": "some_other_source"})
```

```
{'ids': [], 'embeddings': None, 'metadatas': [], 'documents': []}
```
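For contrast with the empty result above, filtering on a key that was actually set returns the matching record. A sketch reusing the `new_value` metadata written in the Update and Delete section:

```python
# Sketch: the same get(where=...) filter, but on metadata we set earlier;
# this should return the single updated document instead of an empty result.
example_db.get(where={"new_value": "hello world"})
```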
# Milvus
Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.

This notebook shows how to use functionality related to the Milvus vector database.

To run, you should have a Milvus instance up and running.

```bash
pip install pymilvus
```

We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

```python
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```

```
OpenAI API Key: ········
```

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
```
1,020 | = OpenAIEmbeddings()vector_db = Milvus.from_documents( docs, embeddings, connection_args={"host": "127.0.0.1", "port": "19530"},)query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'Compartmentalize the data with Milvus Collections​You can store unrelated documents in different collections within the same Milvus instance to keep their contexts separate.Here's how you can create a new collectionvector_db = Milvus.from_documents( docs, embeddings, collection_name = 'collection_1', connection_args={"host": "127.0.0.1", "port": "19530"},)And here is how you retrieve that stored collectionvector_db = Milvus( embeddings, connection_args={"host": "127.0.0.1", "port": "19530"}, collection_name = 'collection_1' )After retrieval you can go on querying it as usual. | Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models. | Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models. ->: = OpenAIEmbeddings()vector_db = Milvus.from_documents( docs, embeddings, connection_args={"host": "127.0.0.1", "port": "19530"},)query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. 
One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'Compartmentalize the data with Milvus Collections​You can store unrelated documents in different collections within the same Milvus instance to keep their contexts separate.Here's how you can create a new collectionvector_db = Milvus.from_documents( docs, embeddings, collection_name = 'collection_1', connection_args={"host": "127.0.0.1", "port": "19530"},)And here is how you retrieve that stored collectionvector_db = Milvus( embeddings, connection_args={"host": "127.0.0.1", "port": "19530"}, collection_name = 'collection_1' )After retrieval you can go on querying it as usual. |
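Once reconnected, the named collection behaves like any other vector store; a small sketch (reusing the vector_db and query variables from the cells above) that also exposes it as a retriever:

    # Query the reloaded collection, or wrap it as a retriever for use in chains.
    docs = vector_db.similarity_search(query, k=2)
    retriever = vector_db.as_retriever(search_kwargs={"k": 2})
    docs = retriever.get_relevant_documents(query)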
1,021 | Annoy | 🦜️🔗 Langchain | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. ->: Annoy | 🦜️🔗 Langchain |
1,022 | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.This notebook shows how to use functionality related to the Annoy vector database.NOTE: Annoy is read-only - once the index is built you cannot add any more embeddings!If you want to progressively add new entries to your VectorStore then you should choose an alternative!#!pip install annoyCreate VectorStore from texts​from langchain.embeddings import HuggingFaceEmbeddingsfrom langchain.vectorstores import Annoyembeddings_func = HuggingFaceEmbeddings()texts = ["pizza is great", "I love salad", "my car", "a dog"]# default metric is angularvector_store = Annoy.from_texts(texts, embeddings_func)# allows for custom annoy parameters, defaults are n_trees=100, n_jobs=-1, metric="angular"vector_store_v2 = | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. 
->: Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.This notebook shows how to use functionality related to the Annoy vector database.NOTE: Annoy is read-only - once the index is built you cannot add any more embeddings!If you want to progressively add new entries to your VectorStore then you should choose an alternative!#!pip install annoyCreate VectorStore from texts​from langchain.embeddings import HuggingFaceEmbeddingsfrom langchain.vectorstores import Annoyembeddings_func = HuggingFaceEmbeddings()texts = ["pizza is great", "I love salad", "my car", "a dog"]# default metric is angularvector_store = Annoy.from_texts(texts, embeddings_func)# allows for custom annoy parameters, defaults are n_trees=100, n_jobs=-1, metric="angular"vector_store_v2 =
1,023 | n_jobs=-1, metric="angular"vector_store_v2 = Annoy.from_texts( texts, embeddings_func, metric="dot", n_trees=100, n_jobs=1)vector_store.similarity_search("food", k=3) [Document(page_content='pizza is great', metadata={}), Document(page_content='I love salad', metadata={}), Document(page_content='my car', metadata={})]# the score is a distance metric, so lower is bettervector_store.similarity_search_with_score("food", k=3) [(Document(page_content='pizza is great', metadata={}), 1.0944390296936035), (Document(page_content='I love salad', metadata={}), 1.1273186206817627), (Document(page_content='my car', metadata={}), 1.1580758094787598)]Create VectorStore from docs​from langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)docs[:5] [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. 
->: n_jobs=-1, metric="angular"vector_store_v2 = Annoy.from_texts( texts, embeddings_func, metric="dot", n_trees=100, n_jobs=1)vector_store.similarity_search("food", k=3) [Document(page_content='pizza is great', metadata={}), Document(page_content='I love salad', metadata={}), Document(page_content='my car', metadata={})]# the score is a distance metric, so lower is bettervector_store.similarity_search_with_score("food", k=3) [(Document(page_content='pizza is great', metadata={}), 1.0944390296936035), (Document(page_content='I love salad', metadata={}), 1.1273186206817627), (Document(page_content='my car', metadata={}), 1.1580758094787598)]Create VectorStore from docs​from langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)docs[:5] [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, |
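If you prefer similarities to distances: Annoy documents its angular metric as sqrt(2 - 2*cos(u, v)), so a cosine similarity can be recovered from the returned distance. A quick sketch using the vector_store built above:

    # Convert Annoy's angular distance back to a cosine similarity:
    # d = sqrt(2 - 2*cos(u, v))  =>  cos(u, v) = 1 - d**2 / 2
    for doc, dist in vector_store.similarity_search_with_score("food", k=3):
        print(doc.page_content, 1 - dist**2 / 2)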
1,024 | fearlessness, their courage, their determination, inspires the world.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \n\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n\nThroughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n\nThey keep moving. \n\nAnd the costs and the threats to America and the world keep rising. \n\nThat’s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \n\nThe United States is a member along with 29 other nations. \n\nIt matters. American diplomacy matters. American resolve matters.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. \n\nHe rejected repeated efforts at diplomacy. \n\nHe thought the West and NATO wouldn’t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n\nWe prepared extensively and carefully. \n\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \n\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n\nWe countered Russia’s lies with truth. \n\nAnd now that he has acted the free world | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. ->: fearlessness, their courage, their determination, inspires the world.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \n\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n\nThroughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n\nThey keep moving. \n\nAnd the costs and the threats to America and the world keep rising. 
\n\nThat’s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \n\nThe United States is a member along with 29 other nations. \n\nIt matters. American diplomacy matters. American resolve matters.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. \n\nHe rejected repeated efforts at diplomacy. \n\nHe thought the West and NATO wouldn’t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n\nWe prepared extensively and carefully. \n\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \n\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n\nWe countered Russia’s lies with truth. \n\nAnd now that he has acted the free world |
1,025 | \n\nAnd now that he has acted the free world is holding him accountable. \n\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n\nTogether with our allies –we are right now enforcing powerful economic sanctions. \n\nWe are cutting off Russia’s largest banks from the international financial system. \n\nPreventing Russia’s central bank from defending the Russian Ruble making Putin’s $630 Billion “war fund” worthless. \n\nWe are choking off Russia’s access to technology that will sap its economic strength and weaken its military for years to come. \n\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n\nThe U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n\nWe are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights – further isolating Russia – and adding an additional squeeze –on their economy. The Ruble has lost 30% of its value. \n\nThe Russian stock market has lost 40% of its value and trading remains suspended. Russia’s economy is reeling and Putin alone is to blame. \n\nTogether with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n\nWe | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. ->: \n\nAnd now that he has acted the free world is holding him accountable. \n\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n\nTogether with our allies –we are right now enforcing powerful economic sanctions. \n\nWe are cutting off Russia’s largest banks from the international financial system. \n\nPreventing Russia’s central bank from defending the Russian Ruble making Putin’s $630 Billion “war fund” worthless. \n\nWe are choking off Russia’s access to technology that will sap its economic strength and weaken its military for years to come. 
\n\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n\nThe U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n\nWe are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.', metadata={'source': '../../../state_of_the_union.txt'}), Document(page_content='And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights – further isolating Russia – and adding an additional squeeze –on their economy. The Ruble has lost 30% of its value. \n\nThe Russian stock market has lost 40% of its value and trading remains suspended. Russia’s economy is reeling and Putin alone is to blame. \n\nTogether with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n\nWe |
1,026 | assistance. Humanitarian assistance. \n\nWe are giving more than $1 Billion in direct assistance to Ukraine. \n\nAnd we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n\nLet me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \n\nOur forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies – in the event that Putin decides to keep moving west.', metadata={'source': '../../../state_of_the_union.txt'})]vector_store_from_docs = Annoy.from_documents(docs, embeddings_func)query = "What did the president say about Ketanji Brown Jackson"docs = vector_store_from_docs.similarity_search(query)print(docs[0].page_content[:100]) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights AcCreate VectorStore via existing embeddings​embs = embeddings_func.embed_documents(texts)data = list(zip(texts, embs))vector_store_from_embeddings = Annoy.from_embeddings(data, embeddings_func)vector_store_from_embeddings.similarity_search_with_score("food", k=3) [(Document(page_content='pizza is great', metadata={}), 1.0944390296936035), (Document(page_content='I love salad', metadata={}), 1.1273186206817627), (Document(page_content='my car', metadata={}), 1.1580758094787598)]Search via embeddings​motorbike_emb = embeddings_func.embed_query("motorbike")vector_store.similarity_search_by_vector(motorbike_emb, k=3) [Document(page_content='my car', metadata={}), Document(page_content='a dog', metadata={}), Document(page_content='pizza is great', metadata={})]vector_store.similarity_search_with_score_by_vector(motorbike_emb, k=3) [(Document(page_content='my car', metadata={}), 1.0870471000671387), (Document(page_content='a dog', metadata={}), 1.2095637321472168), (Document(page_content='pizza is great', metadata={}), 1.3254905939102173)]Search via docstore | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. ->: assistance. Humanitarian assistance. \n\nWe are giving more than $1 Billion in direct assistance to Ukraine. \n\nAnd we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n\nLet me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \n\nOur forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies – in the event that Putin decides to keep moving west.', metadata={'source': '../../../state_of_the_union.txt'})]vector_store_from_docs = Annoy.from_documents(docs, embeddings_func)query = "What did the president say about Ketanji Brown Jackson"docs = vector_store_from_docs.similarity_search(query)print(docs[0].page_content[:100]) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. 
Pass the John Lewis Voting Rights AcCreate VectorStore via existing embeddings​embs = embeddings_func.embed_documents(texts)data = list(zip(texts, embs))vector_store_from_embeddings = Annoy.from_embeddings(data, embeddings_func)vector_store_from_embeddings.similarity_search_with_score("food", k=3) [(Document(page_content='pizza is great', metadata={}), 1.0944390296936035), (Document(page_content='I love salad', metadata={}), 1.1273186206817627), (Document(page_content='my car', metadata={}), 1.1580758094787598)]Search via embeddings​motorbike_emb = embeddings_func.embed_query("motorbike")vector_store.similarity_search_by_vector(motorbike_emb, k=3) [Document(page_content='my car', metadata={}), Document(page_content='a dog', metadata={}), Document(page_content='pizza is great', metadata={})]vector_store.similarity_search_with_score_by_vector(motorbike_emb, k=3) [(Document(page_content='my car', metadata={}), 1.0870471000671387), (Document(page_content='a dog', metadata={}), 1.2095637321472168), (Document(page_content='pizza is great', metadata={}), 1.3254905939102173)]Search via docstore |
1,027 | 1.3254905939102173)]Search via docstore id​vector_store.index_to_docstore_id {0: '2d1498a8-a37c-4798-acb9-0016504ed798', 1: '2d30aecc-88e0-4469-9d51-0ef7e9858e6d', 2: '927f1120-985b-4691-b577-ad5cb42e011c', 3: '3056ddcf-a62f-48c8-bd98-b9e57a3dfcae'}some_docstore_id = 0 # texts[0]vector_store.docstore._dict[vector_store.index_to_docstore_id[some_docstore_id]] Document(page_content='pizza is great', metadata={})# same document has distance 0vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3) [(Document(page_content='pizza is great', metadata={}), 0.0), (Document(page_content='I love salad', metadata={}), 1.0734446048736572), (Document(page_content='my car', metadata={}), 1.2895267009735107)]Save and load​vector_store.save_local("my_annoy_index_and_docstore") saving configloaded_vector_store = Annoy.load_local( "my_annoy_index_and_docstore", embeddings=embeddings_func)# same document has distance 0loaded_vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3) [(Document(page_content='pizza is great', metadata={}), 0.0), (Document(page_content='I love salad', metadata={}), 1.0734446048736572), (Document(page_content='my car', metadata={}), 1.2895267009735107)]Construct from scratch​import uuidfrom annoy import AnnoyIndexfrom langchain.docstore.document import Documentfrom langchain.docstore.in_memory import InMemoryDocstoremetadatas = [{"x": "food"}, {"x": "food"}, {"x": "stuff"}, {"x": "animal"}]# embeddingsembeddings = embeddings_func.embed_documents(texts)# embedding dimf = len(embeddings[0])# indexmetric = "angular"index = AnnoyIndex(f, metric=metric)for i, emb in enumerate(embeddings): index.add_item(i, emb)index.build(10)# docstoredocuments = []for i, text in enumerate(texts): metadata = metadatas[i] if metadatas else {} documents.append(Document(page_content=text, metadata=metadata))index_to_docstore_id = {i: str(uuid.uuid4()) for i in | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. 
->: 1.3254905939102173)]Search via docstore id​vector_store.index_to_docstore_id {0: '2d1498a8-a37c-4798-acb9-0016504ed798', 1: '2d30aecc-88e0-4469-9d51-0ef7e9858e6d', 2: '927f1120-985b-4691-b577-ad5cb42e011c', 3: '3056ddcf-a62f-48c8-bd98-b9e57a3dfcae'}some_docstore_id = 0 # texts[0]vector_store.docstore._dict[vector_store.index_to_docstore_id[some_docstore_id]] Document(page_content='pizza is great', metadata={})# same document has distance 0vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3) [(Document(page_content='pizza is great', metadata={}), 0.0), (Document(page_content='I love salad', metadata={}), 1.0734446048736572), (Document(page_content='my car', metadata={}), 1.2895267009735107)]Save and load​vector_store.save_local("my_annoy_index_and_docstore") saving configloaded_vector_store = Annoy.load_local( "my_annoy_index_and_docstore", embeddings=embeddings_func)# same document has distance 0loaded_vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3) [(Document(page_content='pizza is great', metadata={}), 0.0), (Document(page_content='I love salad', metadata={}), 1.0734446048736572), (Document(page_content='my car', metadata={}), 1.2895267009735107)]Construct from scratch​import uuidfrom annoy import AnnoyIndexfrom langchain.docstore.document import Documentfrom langchain.docstore.in_memory import InMemoryDocstoremetadatas = [{"x": "food"}, {"x": "food"}, {"x": "stuff"}, {"x": "animal"}]# embeddingsembeddings = embeddings_func.embed_documents(texts)# embedding dimf = len(embeddings[0])# indexmetric = "angular"index = AnnoyIndex(f, metric=metric)for i, emb in enumerate(embeddings): index.add_item(i, emb)index.build(10)# docstoredocuments = []for i, text in enumerate(texts): metadata = metadatas[i] if metadatas else {} documents.append(Document(page_content=text, metadata=metadata))index_to_docstore_id = {i: str(uuid.uuid4()) for i in
1,028 | = {i: str(uuid.uuid4()) for i in range(len(documents))}docstore = InMemoryDocstore( {index_to_docstore_id[i]: doc for i, doc in enumerate(documents)})db_manually = Annoy( embeddings_func.embed_query, index, metric, docstore, index_to_docstore_id)db_manually.similarity_search_with_score("eating!", k=3) [(Document(page_content='pizza is great', metadata={'x': 'food'}), 1.1314140558242798), (Document(page_content='I love salad', metadata={'x': 'food'}), 1.1668788194656372), (Document(page_content='my car', metadata={'x': 'stuff'}), 1.226445198059082)] | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. | Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. ->: = {i: str(uuid.uuid4()) for i in range(len(documents))}docstore = InMemoryDocstore( {index_to_docstore_id[i]: doc for i, doc in enumerate(documents)})db_manually = Annoy( embeddings_func.embed_query, index, metric, docstore, index_to_docstore_id)db_manually.similarity_search_with_score("eating!", k=3) [(Document(page_content='pizza is great', metadata={'x': 'food'}), 1.1314140558242798), (Document(page_content='I love salad', metadata={'x': 'food'}), 1.1668788194656372), (Document(page_content='my car', metadata={'x': 'stuff'}), 1.226445198059082)] |
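A manually assembled store supports the same persistence helpers shown in the Save and load section; a short sketch, assuming the db_manually and embeddings_func objects from the cells above:

    # Persist the hand-built index + docstore, then reload and query it.
    db_manually.save_local("my_manual_annoy_index")
    reloaded = Annoy.load_local("my_manual_annoy_index", embeddings=embeddings_func)
    print(reloaded.similarity_search_with_score("eating!", k=1))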
1,029 | Zep | 🦜️🔗 Langchain | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, ->: Zep | 🦜️🔗 Langchain |
1,030 | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents,
chat history memory & rich user data to your LLM app's prompts.Note: The ZepVectorStore works with Documents and is intended to be used as a Retriever.
It offers functionality separate from Zep's ZepMemory class, which is designed for persisting, enriching
and searching your user's chat history.Why Zep's VectorStore? 🤖🚀​Zep automatically embeds documents added to the Zep Vector Store using low-latency models local to the Zep server.
The Zep client also offers async interfaces for all document operations. These two together with Zep's chat memory
functionality make Zep ideal for building conversational LLM apps where latency and performance are important.Installation​Follow the Zep Quickstart Guide to install and get started with Zep.Usage​You'll need your Zep API URL and optionally an API key to use the Zep VectorStore. | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, ->: Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents,
chat history memory & rich user data to your LLM app's prompts.Note: The ZepVectorStore works with Documents and is intended to be used as a Retriever.
It offers functionality separate from Zep's ZepMemory class, which is designed for persisting, enriching
and searching your user's chat history.Why Zep's VectorStore? 🤖🚀​Zep automatically embeds documents added to the Zep Vector Store using low-latency models local to the Zep server.
The Zep client also offers async interfaces for all document operations. These two together with Zep's chat memory
functionality make Zep ideal for building conversational LLM apps where latency and performance are important.Installation​Follow the Zep Quickstart Guide to install and get started with Zep.Usage​You'll need your Zep API URL and optionally an API key to use the Zep VectorStore. |
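If the collection already exists on your Zep server, you can attach to it without re-adding documents; a hedged sketch (the collection name here is hypothetical, and the constructor arguments mirror those used with from_documents later on this page):

    # Connect to an existing, already-populated Zep collection.
    from langchain.vectorstores import ZepVectorStore

    vs = ZepVectorStore(
        collection_name="my_existing_collection",  # hypothetical name
        api_url="http://localhost:8000",
        api_key="<optional_key>",
    )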
See the Zep docs for more information. In the examples below, we're using Zep's auto-embedding feature, which automatically embeds documents on the Zep server
using low-latency embedding models.Note​These examples use Zep's async interfaces. Call sync interfaces by removing the a prefix from the method names. If you pass in an Embeddings instance Zep will use this to embed documents rather than auto-embed them. | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, ->: See the Zep docs for more information. In the examples below, we're using Zep's auto-embedding feature, which automatically embeds documents on the Zep server
using low-latency embedding models.Note​These examples use Zep's async interfaces. Call sync interfaces by removing the a prefix from the method names. If you pass in an Embeddings instance Zep will use this to embed documents rather than auto-embed them. |
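Concretely, a client-side-embedded collection pairs is_auto_embedded=False with an Embeddings instance; a hedged sketch (OpenAIEmbeddings is just one example, and the collection name is hypothetical):

    # Let the client embed documents instead of Zep's server-side embedder.
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import ZepVectorStore
    from langchain.vectorstores.zep import CollectionConfig

    config = CollectionConfig(
        name="myclientembedded",        # hypothetical collection name
        description="client-side embedded docs",
        metadata={},
        is_auto_embedded=False,         # Zep stores vectors but does not embed
        embedding_dimensions=1536,      # must match your embedding model
    )
    vs = ZepVectorStore.from_documents(
        docs,                           # documents from your own loader/splitter
        embedding=OpenAIEmbeddings(),   # vectors computed client-side
        collection_name="myclientembedded",
        config=config,
        api_url="http://localhost:8000",
    )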
1,032 | You must also set your document collection to isAutoEmbedded === false. If you set your collection to isAutoEmbedded === false, you must pass in an Embeddings instance.Load or create a Collection from documents​from uuid import uuid4from langchain.document_loaders import WebBaseLoaderfrom langchain.text_splitter import RecursiveCharacterTextSplitterfrom langchain.vectorstores import ZepVectorStorefrom langchain.vectorstores.zep import CollectionConfigZEP_API_URL = "http://localhost:8000" # this is the API URL of your Zep instanceZEP_API_KEY = "<optional_key>" # optional API Key for your Zep instancecollection_name = f"babbage{uuid4().hex}" # a unique collection name. alphanum only# Collection config is needed if we're creating a new Zep Collectionconfig = CollectionConfig( name=collection_name, description="<optional description>", metadata={"optional_metadata": "associated with the collection"}, is_auto_embedded=True, # we'll have Zep embed our documents using its low-latency embedder embedding_dimensions=1536 # this should match the model you've configured Zep to use.)# load the documentarticle_url = "https://www.gutenberg.org/cache/epub/71292/pg71292.txt"loader = WebBaseLoader(article_url)documents = loader.load()# split it into chunkstext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)docs = text_splitter.split_documents(documents)# Instantiate the VectorStore. | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, ->: You must also set your document collection to isAutoEmbedded === false. If you set your collection to isAutoEmbedded === false, you must pass in an Embeddings instance.Load or create a Collection from documents​from uuid import uuid4from langchain.document_loaders import WebBaseLoaderfrom langchain.text_splitter import RecursiveCharacterTextSplitterfrom langchain.vectorstores import ZepVectorStorefrom langchain.vectorstores.zep import CollectionConfigZEP_API_URL = "http://localhost:8000" # this is the API URL of your Zep instanceZEP_API_KEY = "<optional_key>" # optional API Key for your Zep instancecollection_name = f"babbage{uuid4().hex}" # a unique collection name. alphanum only# Collection config is needed if we're creating a new Zep Collectionconfig = CollectionConfig( name=collection_name, description="<optional description>", metadata={"optional_metadata": "associated with the collection"}, is_auto_embedded=True, # we'll have Zep embed our documents using its low-latency embedder embedding_dimensions=1536 # this should match the model you've configured Zep to use.)# load the documentarticle_url = "https://www.gutenberg.org/cache/epub/71292/pg71292.txt"loader = WebBaseLoader(article_url)documents = loader.load()# split it into chunkstext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)docs = text_splitter.split_documents(documents)# Instantiate the VectorStore. 
Since the collection does not already exist in Zep,# it will be created and populated with the documents we pass in.vs = ZepVectorStore.from_documents(docs, collection_name=collection_name, config=config, api_url=ZEP_API_URL, api_key=ZEP_API_KEY )# wait for the collection embedding to completeasync def wait_for_ready(collection_name: str) -> None: from zep_python import ZepClient |
1,033 | -> None: from zep_python import ZepClient import time client = ZepClient(ZEP_API_URL, ZEP_API_KEY) while True: c = await client.document.aget_collection(collection_name) print( "Embedding status: " f"{c.document_embedded_count}/{c.document_count} documents embedded" ) time.sleep(1) if c.status == "ready": breakawait wait_for_ready(collection_name) Embedding status: 0/402 documents embedded Embedding status: 0/402 documents embedded Embedding status: 402/402 documents embeddedSimilarity Search Query over the Collection​# query itquery = "what is the structure of our solar system?"docs_scores = await vs.asimilarity_search_with_relevance_scores(query, k=3)# print resultsfor d, s in docs_scores: print(d.page_content, " -> ", s, "\n====\n") Tables necessary to determine the places of the planets are not less necessary than those for the sun, moon, and stars. Some notion of the number and complexity of these tables may be formed, when we state that the positions of the two principal planets, (and these are the most necessary for the navigator,) Jupiter and Saturn, require each not less than one hundred and sixteen tables. | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, ->: -> None: from zep_python import ZepClient import time client = ZepClient(ZEP_API_URL, ZEP_API_KEY) while True: c = await client.document.aget_collection(collection_name) print( "Embedding status: " f"{c.document_embedded_count}/{c.document_count} documents embedded" ) time.sleep(1) if c.status == "ready": breakawait wait_for_ready(collection_name) Embedding status: 0/402 documents embedded Embedding status: 0/402 documents embedded Embedding status: 402/402 documents embeddedSimilarity Search Query over the Collection​# query itquery = "what is the structure of our solar system?"docs_scores = await vs.asimilarity_search_with_relevance_scores(query, k=3)# print resultsfor d, s in docs_scores: print(d.page_content, " -> ", s, "\n====\n") Tables necessary to determine the places of the planets are not less necessary than those for the sun, moon, and stars. Some notion of the number and complexity of these tables may be formed, when we state that the positions of the two principal planets, (and these are the most necessary for the navigator,) Jupiter and Saturn, require each not less than one hundred and sixteen tables. 
Yet it is not only necessary to predict the position of these bodies, but it is likewise expedient to -> 0.8998482592744614 ==== tabulate the motions of the four satellites of Jupiter, to predict the exact times at which they enter his shadow, and at which their shadows cross his disc, as well as the times at which they are interposed between him and the Earth, and he between them and the Earth. Among the extensive classes of tables here enumerated, there are several which are in their nature permanent and unalterable, and would never require to be recomputed, if they could once be computed with perfect -> 0.8976143854195493 ==== the scheme of notation thus applied, immediately suggested the |
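As the Note above mentions, the synchronous counterparts simply drop the a prefix; for example:

    # Synchronous equivalent of the async query above.
    docs_scores = vs.similarity_search_with_relevance_scores(
        "what is the structure of our solar system?", k=3
    )
    for d, s in docs_scores:
        print(s, d.page_content[:80])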
thus applied, immediately suggested the advantages which must attend it as an instrument for expressing the structure, operation, and circulation of the animal system; and we entertain no doubt of its adequacy for that purpose. Not only the mechanical connexion of the solid members of the bodies of men and animals, but likewise the structure and operation of the softer parts, including the muscles, integuments, membranes, &c. the nature, motion, -> 0.889982614061763 ====Search over Collection Re-ranked by MMR​query = "what is the structure of our solar system?"docs = await vs.asearch(query, search_type="mmr", k=3)for d in docs: print(d.page_content, "\n====\n") Tables necessary to determine the places of the planets are not less necessary than those for the sun, moon, and stars. Some notion of the number and complexity of these tables may be formed, when we state that the positions of the two principal planets, (and these are the most necessary for the navigator,) Jupiter and Saturn, require each not less than one hundred and sixteen tables. Yet it is not only necessary to predict the position of these bodies, but it is likewise expedient to ==== the scheme of notation thus applied, immediately suggested the advantages which must attend it as an instrument for expressing the structure, operation, and circulation of the animal system; and we entertain no doubt of its adequacy for that purpose. | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, ->: thus applied, immediately suggested the advantages which must attend it as an instrument for expressing the structure, operation, and circulation of the animal system; and we entertain no doubt of its adequacy for that purpose. 
Not only the mechanical connexion of the solid members of the bodies of men and animals, but likewise the structure and operation of the softer parts, including the muscles, integuments, membranes, &c. the nature, motion, ==== tabulate the motions of the four satellites of Jupiter, to predict the exact times at which they enter his shadow, and at which their shadows cross his disc, as well as the times at which they are interposed between him and the Earth, and he between them and |
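MMR re-ranking is also reachable through the retriever interface, which is convenient when plugging the store into a chain; a small sketch:

    # Expose the Zep store as an MMR retriever.
    retriever = vs.as_retriever(search_type="mmr", search_kwargs={"k": 3})
    docs = retriever.get_relevant_documents("what is the structure of our solar system?")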
1,035 | him and the Earth, and he between them and the Earth. Among the extensive classes of tables here enumerated, there are several which are in their nature permanent and unalterable, and would never require to be recomputed, if they could once be computed with perfect ====Filter by MetadataUse a metadata filter to narrow down results. First, load another book: "Adventures of Sherlock Holmes"# Let's add more content to the existing Collectionarticle_url = "https://www.gutenberg.org/files/48320/48320-0.txt"loader = WebBaseLoader(article_url)documents = loader.load()# split it into chunkstext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)docs = text_splitter.split_documents(documents)await vs.aadd_documents(docs)await wait_for_ready(collection_name) Embedding status: 402/1692 documents embedded Embedding status: 402/1692 documents embedded Embedding status: 552/1692 documents embedded Embedding status: 702/1692 documents embedded Embedding status: 1002/1692 documents embedded Embedding status: 1002/1692 documents embedded Embedding status: 1152/1692 documents embedded Embedding status: 1302/1692 documents embedded Embedding status: 1452/1692 documents embedded Embedding status: 1602/1692 documents embedded Embedding status: 1692/1692 documents embeddedWe see results from both books. Note the source metadata‚Äãquery = "Was he interested in astronomy?"docs = await vs.asearch(query, search_type="similarity", k=3)for d in docs: print(d.page_content, " -> ", d.metadata, "\n====\n") by that body to Mr Babbage:--'In no department of science, or of the arts, does this discovery promise to be so eminently useful as in that of astronomy, and its kindred sciences, with the various arts dependent on them. In none are computations more operose than those which astronomy in particular requires;--in none are preparatory facilities more needful;--in none is error more detrimental. The | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents, ->: him and the Earth, and he between them and the Earth. Among the extensive classes of tables here enumerated, there are several which are in their nature permanent and unalterable, and would never require to be recomputed, if they could once be computed with perfect ====Filter by MetadataUse a metadata filter to narrow down results. First, load another book: "Adventures of Sherlock Holmes"# Let's add more content to the existing Collectionarticle_url = "https://www.gutenberg.org/files/48320/48320-0.txt"loader = WebBaseLoader(article_url)documents = loader.load()# split it into chunkstext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)docs = text_splitter.split_documents(documents)await vs.aadd_documents(docs)await wait_for_ready(collection_name) Embedding status: 402/1692 documents embedded Embedding status: 402/1692 documents embedded Embedding status: 552/1692 documents embedded Embedding status: 702/1692 documents embedded Embedding status: 1002/1692 documents embedded Embedding status: 1002/1692 documents embedded Embedding status: 1152/1692 documents embedded Embedding status: 1302/1692 documents embedded Embedding status: 1452/1692 documents embedded Embedding status: 1602/1692 documents embedded Embedding status: 1692/1692 documents embeddedWe see results from both books. 
Note the source metadata​query = "Was he interested in astronomy?"docs = await vs.asearch(query, search_type="similarity", k=3)for d in docs: print(d.page_content, " -> ", d.metadata, "\n====\n") by that body to Mr Babbage:--'In no department of science, or of the arts, does this discovery promise to be so eminently useful as in that of astronomy, and its kindred sciences, with the various arts dependent on them. In none are computations more operose than those which astronomy in particular requires;--in none are preparatory facilities more needful;--in none is error more detrimental. The
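The source metadata matched below is attached automatically by the loader, but you can supply your own when adding content; a hedged sketch, assuming the async add-texts counterpart follows the same a-prefix convention:

    # Add raw texts with explicit metadata for later jsonpath filtering.
    await vs.aadd_texts(
        ["Elementary, my dear Watson."],
        metadatas=[{"source": "https://www.gutenberg.org/files/48320/48320-0.txt"}],
    )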
1,036 | needful;--in none is error more detrimental. The practical astronomer is interrupted in his pursuit, and diverted from his task of  ->  {'source': 'https://www.gutenberg.org/cache/epub/71292/pg71292.txt'}
====
possess all knowledge which is likely to be useful to him in his work, and this I have endeavored in my case to do. If I remember rightly, you on one occasion, in the early days of our friendship, defined my limits in a very precise fashion.” “Yes,” I answered, laughing. “It was a singular document. Philosophy, astronomy, and politics were marked at zero, I remember. Botany variable, geology profound as regards the mud-stains from any region  ->  {'source': 'https://www.gutenberg.org/files/48320/48320-0.txt'}
====
in all its relations; but above all, with Astronomy and Navigation. So important have they been considered, that in many instances large sums have been appropriated by the most enlightened nations in the production of them; and yet so numerous and insurmountable have been the difficulties attending the attainment of this end, that after all, even navigators, putting aside every other department of art and science, have, until very recently, been scantily and imperfectly supplied with  ->  {'source': 'https://www.gutenberg.org/cache/epub/71292/pg71292.txt'}
====

Let's try again using a filter for only the Sherlock Holmes document.

filter = {
    "where": {"jsonpath": "$[*] ? (@.source == 'https://www.gutenberg.org/files/48320/48320-0.txt')"},
}
docs = await vs.asearch(query, search_type="similarity", metadata=filter, k=3)
for d in docs:
    print(d.page_content, " -> ", d.metadata, "\n====\n")

    possess all knowledge which is likely to be useful to him in his work, and this I have endeavored in my case to do. If I remember rightly, you on one occasion, in the early days of our friendship, defined my limits in a very precise fashion.” “Yes,” I answered, | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents,
1,037 | precise fashion.” “Yes,” I answered, laughing. “It was a singular document. Philosophy, astronomy, and politics were marked at zero, I remember. Botany variable, geology profound as regards the mud-stains from any region  ->  {'source': 'https://www.gutenberg.org/files/48320/48320-0.txt'}
====
the light shining upon his strong-set aquiline features. So he sat as I dropped off to sleep, and so he sat when a sudden ejaculation caused me to wake up, and I found the summer sun shining into the apartment. The pipe was still between his lips, the smoke still curled upward, and the room was full of a dense tobacco haze, but nothing remained of the heap of shag which I had seen upon the previous night. “Awake, Watson?” he asked. “Yes.” “Game for a morning drive?”  ->  {'source': 'https://www.gutenberg.org/files/48320/48320-0.txt'}
====
“I glanced at the books upon the table, and in spite of my ignorance of German I could see that two of them were treatises on science, the others being volumes of poetry. Then I walked across to the window, hoping that I might catch some glimpse of the country-side, but an oak shutter, heavily barred, was folded across it. It was a wonderfully silent house. There was an old clock ticking loudly somewhere in the passage, but otherwise everything was deadly still. A vague feeling of  ->  {'source': 'https://www.gutenberg.org/files/48320/48320-0.txt'}
==== | Zep is an open-source long-term memory store for LLM applications. Zep makes it easy to add relevant documents,
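The JSONPath "where" filter above is easy to generalize. A minimal sketch under the same setup (source_filter and search_one_book are hypothetical helper names; vs is the ZepVectorStore built in the preceding rows, and inside a notebook's running event loop you would simply await the coroutine instead of calling asyncio.run):

import asyncio

def source_filter(source_url: str) -> dict:
    # Build the JSONPath metadata filter shown above for an arbitrary source URL.
    return {"where": {"jsonpath": f"$[*] ? (@.source == '{source_url}')"}}

async def search_one_book(query: str, source_url: str):
    # Filtered similarity search, mirroring the asearch call used above.
    return await vs.asearch(
        query,
        search_type="similarity",
        metadata=source_filter(source_url),
        k=3,
    )

docs = asyncio.run(
    search_one_book(
        "Was he interested in astronomy?",
        "https://www.gutenberg.org/files/48320/48320-0.txt",
    )
)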
1,038 | Marqo | 🦜️🔗 Langchain | This notebook shows how to use functionality related to the Marqo vectorstore.
1,039 | Marqo

This notebook shows how to use functionality related to the Marqo vectorstore.

Marqo is an open-source vector search engine. Marqo allows you to store and query multi-modal data such as text and images. Marqo creates the vectors for you using a huge selection of open-source models; you can also provide your own fine-tuned models and Marqo will handle the loading and inference for you.

To run this notebook with our docker image, please run the following commands first to get Marqo:

docker pull marqoai/marqo:latest
docker rm -f marqo
docker run --name marqo -it --privileged -p 8882:8882 --add-host host.docker.internal:host-gateway marqoai/marqo:latest

pip install marqo

from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Marqo
from langchain.document_loaders import TextLoader

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = | This notebook shows how to use functionality related to the Marqo vectorstore.
1,040 | = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

import marqo

# initialize marqo
marqo_url = "http://localhost:8882"  # if using marqo cloud replace with your endpoint (console.marqo.ai)
marqo_api_key = ""  # if using marqo cloud replace with your api key (console.marqo.ai)
client = marqo.Client(url=marqo_url, api_key=marqo_api_key)

index_name = "langchain-demo"
docsearch = Marqo.from_documents(docs, index_name=index_name)
query = "What did the president say about Ketanji Brown Jackson"
result_docs = docsearch.similarity_search(query)

    Index langchain-demo exists.

print(result_docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

result_docs = docsearch.similarity_search_with_score(query)
print(result_docs[0][0].page_content, result_docs[0][1], sep="\n")

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and | This notebook shows how to use functionality related to the Marqo vectorstore.
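Because similarity_search_with_score returns (Document, score) pairs, you can discard weak matches client-side. A small sketch using the docsearch and query from above; the 0.6 cutoff is an arbitrary illustration, not a recommended value:

# Keep only results whose similarity score clears an illustrative threshold.
results = docsearch.similarity_search_with_score(query)
strong_matches = [doc for doc, score in results if score > 0.6]
for doc in strong_matches:
    print(doc.page_content[:80])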
1,041 | Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

    0.68647254

Additional features

One of the powerful features of Marqo as a vectorstore is that you can use indexes created externally. For example:

If you had a database of image and text pairs from another application, you can simply use it in langchain with the Marqo vectorstore. Note that bringing your own multimodal indexes will disable the add_texts method.

If you had a database of text documents, you can bring it into the langchain framework and add more texts through add_texts.

The documents that are returned are customised by passing your own function to the page_content_builder callback in the search methods.

Multimodal Example

# use a new index
index_name = "langchain-multimodal-demo"

# in case the demo is re-run
try:
    client.delete_index(index_name)
except Exception:
    print(f"Creating {index_name}")

# This index could have been created by another system
settings = {"treat_urls_and_pointers_as_images": True, "model": "ViT-L/14"}
client.create_index(index_name, **settings)
client.index(index_name).add_documents(
    [
        # image of a bus
        {
            "caption": "Bus",
            "image": "https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image4.jpg",
        },
        # image of a plane
        {
            "caption": "Plane",
            "image": "https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image2.jpg",
        },
    ],
)

    {'errors': False, 'processingTimeMs': 2090.2822139996715, | This notebook shows how to use functionality related to the Marqo vectorstore.
1,042 | 'processingTimeMs': 2090.2822139996715, 'index_name': 'langchain-multimodal-demo', 'items': [{'_id': 'aa92fc1c-1fb2-4d86-b027-feb507c419f7', 'result': 'created', 'status': 201}, {'_id': '5142c258-ef9f-4bf2-a1a6-2307280173a0', 'result': 'created', 'status': 201}]}

def get_content(res):
    """Helper to format Marqo's documents into text to be used as page_content"""
    return f"{res['caption']}: {res['image']}"

docsearch = Marqo(client, index_name, page_content_builder=get_content)
query = "vehicles that fly"
doc_results = docsearch.similarity_search(query)
for doc in doc_results:
    print(doc.page_content)

    Plane: https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image2.jpg
    Bus: https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image4.jpg

Text only example

# use a new index
index_name = "langchain-byo-index-demo"

# in case the demo is re-run
try:
    client.delete_index(index_name)
except Exception:
    print(f"Creating {index_name}")

# This index could have been created by another system
client.create_index(index_name)
client.index(index_name).add_documents(
    [
        {
            "Title": "Smartphone",
            "Description": "A smartphone is a portable computer device that combines mobile telephone "
            "functions and computing functions into one unit.",
        },
        {
            "Title": "Telephone",
            "Description": "A telephone is a telecommunications device that permits two or more users to "
            "conduct a conversation when they are too far apart to be easily heard directly.",
        },
    ],
)

    {'errors': False, 'processingTimeMs': 139.2144540004665, 'index_name': 'langchain-byo-index-demo', 'items': [{'_id': '27c05a1c-b8a9-49a5-ae73-fbf1eb51dc3f', 'result': 'created', 'status': 201}, {'_id': '6889afe0-e600-43c1-aa3b-1d91bf6db274', 'result': 'created', 'status': 201}]}

# Note | This notebook shows how to use functionality related to the Marqo vectorstore.
1,043 | 'result': 'created', 'status': 201}]}

# Note text indexes retain the ability to use add_texts despite different field names in documents
# this is because the page_content_builder callback lets you handle these document fields as required

def get_content(res):
    """Helper to format Marqo's documents into text to be used as page_content"""
    if "text" in res:
        return res["text"]
    return res["Description"]

docsearch = Marqo(client, index_name, page_content_builder=get_content)
docsearch.add_texts(["This is a document that is about elephants"])

    ['9986cc72-adcd-4080-9d74-265c173a9ec3']

query = "modern communications devices"
doc_results = docsearch.similarity_search(query)
print(doc_results[0].page_content)

    A smartphone is a portable computer device that combines mobile telephone functions and computing functions into one unit.

query = "elephants"
doc_results = docsearch.similarity_search(query, page_content_builder=get_content)
print(doc_results[0].page_content)

    This is a document that is about elephants

Weighted Queries

We also expose Marqo's weighted queries, which are a powerful way to compose complex semantic searches.

query = {"communications devices": 1.0}
doc_results = docsearch.similarity_search(query)
print(doc_results[0].page_content)

    A smartphone is a portable computer device that combines mobile telephone functions and computing functions into one unit.

query = {"communications devices": 1.0, "technology post 2000": -1.0}
doc_results = docsearch.similarity_search(query)
print(doc_results[0].page_content)

    A telephone is a telecommunications device that permits two or more users to conduct a conversation when they are too far apart to be easily heard directly.

Question Answering with Sources

This section shows how to use Marqo as part of a RetrievalQAWithSourcesChain. Marqo will perform the searches for information in the sources.

from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI
import os
import | This notebook shows how to use functionality related to the Marqo vectorstore.
1,044 | langchain.llms import OpenAI
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

    OpenAI API Key:········

with open("../../modules/state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

index_name = "langchain-qa-with-retrieval"
docsearch = Marqo.from_documents(docs, index_name=index_name)

    Index langchain-qa-with-retrieval exists.

chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever()
)
chain(
    {"question": "What did the president say about Justice Breyer"},
    return_only_outputs=True,
)

    {'answer': ' The president honored Justice Breyer, thanking him for his service and noting that he is a retiring Justice of the United States Supreme Court.\n', 'sources': '../../../state_of_the_union.txt'} | This notebook shows how to use functionality related to the Marqo vectorstore.
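As the chain above shows, the Marqo store plugs into LangChain's standard retriever interface, so you can also use the retriever on its own. A short sketch reusing docsearch from the previous cells (k=3 is an illustrative choice):

retriever = docsearch.as_retriever(search_kwargs={"k": 3})
relevant_docs = retriever.get_relevant_documents(
    "What did the president say about Justice Breyer"
)
for doc in relevant_docs:
    print(doc.page_content[:80])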
1,045 | Azure Cosmos DB | 🦜️🔗 Langchain | Azure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support.
1,046 | Azure Cosmos DB

Azure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support.
You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account's connection string. | Azure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support.
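Since the vCore API speaks the MongoDB wire protocol, a plain pymongo connectivity check works before any LangChain code runs. A minimal sketch; the connection string is a placeholder for your account's:

from pymongo import MongoClient

# Placeholder -- substitute your vCore account's connection string.
client = MongoClient("AZURE COSMOS DB MONGO vCORE connection string")
# Standard MongoDB ping command; returns {'ok': 1.0} on success.
print(client.admin.command("ping"))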
1,047 | Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that's stored in Azure Cosmos DB.

This notebook shows you how to leverage the Vector Search capabilities within Azure Cosmos DB for MongoDB vCore to store documents in collections, create indices and perform vector search queries using approximate nearest neighbor algorithms such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product) to locate documents close to the query vectors.

Azure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture. With Cosmos DB for MongoDB vCore, developers can enjoy the benefits of native Azure integrations, low total cost of ownership (TCO), and the familiar vCore architecture when migrating existing applications or building new ones.

Sign up for free to get started today.

pip install pymongo

    Requirement already satisfied: pymongo in /Users/iekpo/Langchain/langchain-python/.venv/lib/python3.10/site-packages (4.5.0)
    Requirement already satisfied: dnspython<3.0.0,>=1.16.0 in /Users/iekpo/Langchain/langchain-python/.venv/lib/python3.10/site-packages (from pymongo) (2.4.2)

import os
import getpass

CONNECTION_STRING = "AZURE COSMOS DB MONGO vCORE connection string"
INDEX_NAME = "izzy-test-index"
NAMESPACE = "izzy_test_db.izzy_test_collection"
DB_NAME, COLLECTION_NAME = NAMESPACE.split(".")

We want to use OpenAIEmbeddings so we need to set up our Azure OpenAI API Key alongside other environment variables.

# Set up the OpenAI Environment Variables
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["OPENAI_API_BASE"] = "YOUR_OPEN_AI_ENDPOINT"  # https://example.openai.azure.com/
os.environ["OPENAI_API_KEY"] = "YOUR_OPEN_AI_KEY"
os.environ["OPENAI_EMBEDDINGS_DEPLOYMENT"] = "smart-agent-embedding-ada"  # the deployment name for the embedding | Azure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support.
1,048 | # the deployment name for the embedding model
os.environ["OPENAI_EMBEDDINGS_MODEL_NAME"] = "text-embedding-ada-002"  # the model name

Now, we need to load the documents into the collection, create the index and then run our queries against the index to retrieve matches. Please refer to the documentation if you have questions about certain parameters.

from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema.embeddings import Embeddings
from langchain.vectorstores.azure_cosmos_db_vector_search import (
    AzureCosmosDBVectorSearch,
    CosmosDBSimilarityType,
)
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import TextLoader

SOURCE_FILE_NAME = "../../modules/state_of_the_union.txt"

loader = TextLoader(SOURCE_FILE_NAME)
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

# OpenAI Settings
model_deployment = os.getenv("OPENAI_EMBEDDINGS_DEPLOYMENT", "smart-agent-embedding-ada")
model_name = os.getenv("OPENAI_EMBEDDINGS_MODEL_NAME", "text-embedding-ada-002")

openai_embeddings: OpenAIEmbeddings = OpenAIEmbeddings(
    deployment=model_deployment, model=model_name, chunk_size=1
)

from pymongo import MongoClient

INDEX_NAME = "izzy-test-index-2"
NAMESPACE = "izzy_test_db.izzy_test_collection"
DB_NAME, COLLECTION_NAME = NAMESPACE.split(".")

client: MongoClient = MongoClient(CONNECTION_STRING)
collection = client[DB_NAME][COLLECTION_NAME]

model_deployment = os.getenv("OPENAI_EMBEDDINGS_DEPLOYMENT", "smart-agent-embedding-ada")
model_name = os.getenv("OPENAI_EMBEDDINGS_MODEL_NAME", "text-embedding-ada-002")

vectorstore = AzureCosmosDBVectorSearch.from_documents(
    docs,
    openai_embeddings,
    collection=collection,
    index_name=INDEX_NAME,
)

num_lists = 100
dimensions = 1536
similarity_algorithm = CosmosDBSimilarityType.COS

vectorstore.create_index(num_lists, dimensions, similarity_algorithm)

    {'raw': | Azure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support.
1,049 | dimensions, similarity_algorithm)

    {'raw': {'defaultShard': {'numIndexesBefore': 2, 'numIndexesAfter': 3, 'createdCollectionAutomatically': False, 'ok': 1}}, 'ok': 1}

# perform a similarity search between the embedding of the query and the embeddings of the documents
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)

print(docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Once the documents have been loaded and the index has been created, you can now instantiate the vector store directly and run queries against the index.

vectorstore = AzureCosmosDBVectorSearch.from_connection_string(
    CONNECTION_STRING, NAMESPACE, openai_embeddings, index_name=INDEX_NAME
)

# perform a similarity search between a query and the ingested documents
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)

print(docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has | Azure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support.
1,050 | Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

vectorstore = AzureCosmosDBVectorSearch(collection, openai_embeddings, index_name=INDEX_NAME)

# perform a similarity search between a query and the ingested documents
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)

print(docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. | Azure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support.
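The notebook above indexes with CosmosDBSimilarityType.COS. Given the COS, L2, and IP metrics described earlier, the enum should also expose L2 and IP members, but that is my assumption to verify against your installed langchain version. A sketch reusing the create_index call and variables shown above:

from langchain.vectorstores.azure_cosmos_db_vector_search import CosmosDBSimilarityType

# Assumption: L2 (Euclidean distance) is a valid member alongside COS and IP.
vectorstore.create_index(num_lists, dimensions, CosmosDBSimilarityType.L2)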
1,051 | ClickHouse | 🦜️🔗 Langchain | ClickHouse is the fastest and most resource-efficient open-source database for real-time apps and analytics, with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance), as well as approximate nearest neighbor search indexes, enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL.
1,052 | ClickHouse

ClickHouse is the fastest and most resource-efficient open-source database for real-time apps and analytics, with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance), as well as approximate nearest neighbor search indexes, enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL.

This notebook shows how to use functionality related to the ClickHouse vector search.

Setting up environments

Setting up a local ClickHouse server with docker (optional):

docker run -d -p 8123:8123 -p 9000:9000 --name langchain-clickhouse-server --ulimit nofile=262144:262144 clickhouse/clickhouse-server:23.4.2.11

Set up the ClickHouse client driver:

pip install clickhouse-connect

We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.

import os
import getpass

if not os.environ["OPENAI_API_KEY"]: | ClickHouse is the fastest and most resource-efficient open-source database for real-time apps and analytics, with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance), as well as approximate nearest neighbor search indexes, enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL.
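ClickhouseSettings also carries the connection details for the dockerized server above. The host and port field names below match my reading of ClickhouseSettings, but treat them as assumptions to verify against your installed version (the defaults already point at localhost:8123):

from langchain.vectorstores import Clickhouse, ClickhouseSettings

# Assumed field names; adjust if your ClickhouseSettings differs.
settings = ClickhouseSettings(
    host="localhost",
    port=8123,
    table="clickhouse_vector_search_example",
)
docsearch = Clickhouse.from_documents(docs, embeddings, config=settings)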
1,053 | getpass

if not os.environ["OPENAI_API_KEY"]:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Clickhouse, ClickhouseSettings
from langchain.document_loaders import TextLoader

loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

for d in docs:
    d.metadata = {"some": "metadata"}
settings = ClickhouseSettings(table="clickhouse_vector_search_example")
docsearch = Clickhouse.from_documents(docs, embeddings, config=settings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

    Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 2801.49it/s]

print(docs[0].page_content)

    Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Get connection info and data schema

print(str(docsearch))

    default.clickhouse_vector_search_example @ localhost:8123

    username: None

    Table Schema: | ClickHouse is the fastest and most resource-efficient open-source database for real-time apps and analytics, with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance), as well as approximate nearest neighbor search indexes, enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL.
->: getpassif not os.environ["OPENAI_API_KEY"]: os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Clickhouse, ClickhouseSettingsfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()for d in docs: d.metadata = {"some": "metadata"}settings = ClickhouseSettings(table="clickhouse_vector_search_example")docsearch = Clickhouse.from_documents(docs, embeddings, config=settings)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query) Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 2801.49it/s]print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Get connection info and data schema​print(str(docsearch)) default.clickhouse_vector_search_example @ localhost:8123 username: None Table Schema: |
1,054 | username: None Table Schema: --------------------------------------------------- |id |Nullable(String) | |document |Nullable(String) | |embedding |Array(Float32) | |metadata |Object('json') | |uuid |UUID | --------------------------------------------------- Clickhouse table schema​The Clickhouse table will be created automatically by default if it does not exist. Advanced users could pre-create the table with optimized settings. For a distributed Clickhouse cluster with sharding, the table engine should be configured as Distributed.print(f"Clickhouse Table DDL:\n\n{docsearch.schema}") Clickhouse Table DDL: CREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example( id Nullable(String), document Nullable(String), embedding Array(Float32), metadata JSON, uuid UUID DEFAULT generateUUIDv4(), CONSTRAINT cons_vec_len CHECK length(embedding) = 1536, INDEX vec_idx embedding TYPE annoy(100,'L2Distance') GRANULARITY 1000 ) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192Filtering​You have direct access to the ClickHouse SQL WHERE statement and can write a WHERE clause following standard SQL.NOTE: Please be aware of SQL injection; this interface must not be called directly by the end user.If you customized your column_map in your settings, you can search with a filter like this:from langchain.vectorstores import Clickhouse, ClickhouseSettingsfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()for i, d in enumerate(docs): d.metadata = {"doc_id": i}docsearch = Clickhouse.from_documents(docs, embeddings) | ClickHouse is the fastest and most resource efficient open-source database for real-time apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance) as well as approximate nearest neighbor search indexes enable ClickHouse to be used as a high performance and scalable vector database to store and search vectors with SQL. | ClickHouse is the fastest and most resource efficient open-source database for real-time apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance) as well as approximate nearest neighbor search indexes enable ClickHouse to be used as a high performance and scalable vector database to store and search vectors with SQL. ->: username: None Table Schema: --------------------------------------------------- |id |Nullable(String) | |document |Nullable(String) | |embedding |Array(Float32) | |metadata |Object('json') | |uuid |UUID | --------------------------------------------------- Clickhouse table schema​The Clickhouse table will be created automatically by default if it does not exist. Advanced users could pre-create the table with optimized settings.
For a distributed Clickhouse cluster with sharding, the table engine should be configured as Distributed.print(f"Clickhouse Table DDL:\n\n{docsearch.schema}") Clickhouse Table DDL: CREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example( id Nullable(String), document Nullable(String), embedding Array(Float32), metadata JSON, uuid UUID DEFAULT generateUUIDv4(), CONSTRAINT cons_vec_len CHECK length(embedding) = 1536, INDEX vec_idx embedding TYPE annoy(100,'L2Distance') GRANULARITY 1000 ) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192Filtering​You have direct access to the ClickHouse SQL WHERE statement and can write a WHERE clause following standard SQL.NOTE: Please be aware of SQL injection; this interface must not be called directly by the end user.If you customized your column_map in your settings, you can search with a filter like this:from langchain.vectorstores import Clickhouse, ClickhouseSettingsfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()for i, d in enumerate(docs): d.metadata = {"doc_id": i}docsearch = Clickhouse.from_documents(docs, embeddings)
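The Distributed-engine note above ships without a snippet, so here is a minimal sketch of pre-creating the table before handing it to LangChain. It assumes a local single-node server and mirrors the default column layout from the DDL above; the two experimental settings are an assumption for 23.x servers (they allow the JSON column and the annoy index), and on a real sharded cluster you would wrap a local MergeTree table with ENGINE = Distributed instead.
import clickhouse_connect

# Same driver the notebook installed with `pip install clickhouse-connect`.
client = clickhouse_connect.get_client(
    host="localhost",
    port=8123,
    settings={
        "allow_experimental_object_type": 1,
        "allow_experimental_annoy_index": 1,
    },
)

# Pre-create the table with the columns LangChain expects (see the DDL above),
# so Clickhouse.from_documents() reuses it instead of creating its own.
client.command("""
CREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example (
    id Nullable(String),
    document Nullable(String),
    embedding Array(Float32),
    metadata JSON,
    uuid UUID DEFAULT generateUUIDv4(),
    CONSTRAINT cons_vec_len CHECK length(embedding) = 1536,
    INDEX vec_idx embedding TYPE annoy(100, 'L2Distance') GRANULARITY 1000
) ENGINE = MergeTree ORDER BY uuid
""")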
1,055 | = Clickhouse.from_documents(docs, embeddings) Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 6939.56it/s]meta = docsearch.metadata_columnoutput = docsearch.similarity_search_with_relevance_scores( "What did the president say about Ketanji Brown Jackson?", k=4, where_str=f"{meta}.doc_id<10",)for d, dist in output: print(dist, d.metadata, d.page_content[:20] + "...") 0.6779101415357189 {'doc_id': 0} Madam Speaker, Madam... 0.6997970363474885 {'doc_id': 8} And so many families... 0.7044504914336727 {'doc_id': 1} Groups of citizens b... 0.7053558702165094 {'doc_id': 6} And I’m taking robus...Deleting your data​docsearch.drop() | ClickHouse is the fastest and most resource efficient open-source database for real-time apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance) as well as approximate nearest neighbor search indexes enable ClickHouse to be used as a high performance and scalable vector database to store and search vectors with SQL. | ClickHouse is the fastest and most resource efficient open-source database for real-time apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance) as well as approximate nearest neighbor search indexes enable ClickHouse to be used as a high performance and scalable vector database to store and search vectors with SQL. ->: = Clickhouse.from_documents(docs, embeddings) Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 6939.56it/s]meta = docsearch.metadata_columnoutput = docsearch.similarity_search_with_relevance_scores( "What did the president say about Ketanji Brown Jackson?", k=4, where_str=f"{meta}.doc_id<10",)for d, dist in output: print(dist, d.metadata, d.page_content[:20] + "...") 0.6779101415357189 {'doc_id': 0} Madam Speaker, Madam... 0.6997970363474885 {'doc_id': 8} And so many families... 0.7044504914336727 {'doc_id': 1} Groups of citizens b... 0.7053558702165094 {'doc_id': 6} And I’m taking robus...Deleting your data​docsearch.drop()
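Because where_str is interpolated directly into the generated SQL, the injection warning above deserves a concrete mitigation. A minimal sketch under the same setup as the filtering example: validate user-supplied values before building the clause instead of pasting raw strings. The helper name is illustrative, not part of the LangChain API.
def build_doc_id_filter(metadata_column: str, max_doc_id) -> str:
    # Coercing to int guarantees arbitrary SQL can never reach where_str;
    # non-numeric input raises ValueError instead of being executed.
    safe_id = int(max_doc_id)
    return f"{metadata_column}.doc_id < {safe_id}"

meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores(
    "What did the president say about Ketanji Brown Jackson?",
    k=4,
    where_str=build_doc_id_filter(meta, 10),
)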
1,056 | BagelDB | 🦜️🔗 Langchain | BagelDB (Open Vector Database for AI) is like GitHub for AI data. | BagelDB (Open Vector Database for AI) is like GitHub for AI data. ->: BagelDB | 🦜️🔗 Langchain
1,057 | BagelDBBagelDB (Open Vector Database for AI) is like GitHub for AI data.
It is a collaborative platform where users can create,
share, and manage vector datasets. It can support private projects for independent developers, | BagelDB (Open Vector Database for AI) is like GitHub for AI data. | BagelDB (Open Vector Database for AI) is like GitHub for AI data. ->: BagelDBBagelDB (Open Vector Database for AI) is like GitHub for AI data.
It is a collaborative platform where users can create,
share, and manage vector datasets. It can support private projects for independent developers,
1,058 | internal collaborations for enterprises, and public contributions for data DAOs.Installation and Setup​pip install betabageldbCreate VectorStore from texts​from langchain.vectorstores import Bageltexts = ["hello bagel", "hello langchain", "I love salad", "my car", "a dog"]# create cluster and add textscluster = Bagel.from_texts(cluster_name="testing", texts=texts)# similarity searchcluster.similarity_search("bagel", k=3) [Document(page_content='hello bagel', metadata={}), Document(page_content='my car', metadata={}), Document(page_content='I love salad', metadata={})]# the score is a distance metric, so lower is bettercluster.similarity_search_with_score("bagel", k=3) [(Document(page_content='hello bagel', metadata={}), 0.27392977476119995), (Document(page_content='my car', metadata={}), 1.4783176183700562), (Document(page_content='I love salad', metadata={}), 1.5342965126037598)]# delete the clustercluster.delete_cluster()Create VectorStore from docs​from langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)[:10]# create cluster with docscluster = Bagel.from_documents(cluster_name="testing_with_docs", documents=docs)# similarity searchquery = "What did the president say about Ketanji Brown Jackson"docs = cluster.similarity_search(query)print(docs[0].page_content[:102]) Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Get all text/doc from Cluster​texts = ["hello bagel", "this is langchain"]cluster = Bagel.from_texts(cluster_name="testing", texts=texts)cluster_data = cluster.get()# all keyscluster_data.keys() dict_keys(['ids', 'embeddings', 'metadatas', 'documents'])# all values and keyscluster_data {'ids': | BagelDB (Open Vector Database for AI) is like GitHub for AI data. | BagelDB (Open Vector Database for AI) is like GitHub for AI data.
->: internal collaborations for enterprises, and public contributions for data DAOs.Installation and Setup​pip install betabageldbCreate VectorStore from texts​from langchain.vectorstores import Bageltexts = ["hello bagel", "hello langchain", "I love salad", "my car", "a dog"]# create cluster and add textscluster = Bagel.from_texts(cluster_name="testing", texts=texts)# similarity searchcluster.similarity_search("bagel", k=3) [Document(page_content='hello bagel', metadata={}), Document(page_content='my car', metadata={}), Document(page_content='I love salad', metadata={})]# the score is a distance metric, so lower is bettercluster.similarity_search_with_score("bagel", k=3) [(Document(page_content='hello bagel', metadata={}), 0.27392977476119995), (Document(page_content='my car', metadata={}), 1.4783176183700562), (Document(page_content='I love salad', metadata={}), 1.5342965126037598)]# delete the clustercluster.delete_cluster()Create VectorStore from docs​from langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)[:10]# create cluster with docscluster = Bagel.from_documents(cluster_name="testing_with_docs", documents=docs)# similarity searchquery = "What did the president say about Ketanji Brown Jackson"docs = cluster.similarity_search(query)print(docs[0].page_content[:102]) Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Get all text/doc from Cluster​texts = ["hello bagel", "this is langchain"]cluster = Bagel.from_texts(cluster_name="testing", texts=texts)cluster_data = cluster.get()# all keyscluster_data.keys() dict_keys(['ids', 'embeddings', 'metadatas', 'documents'])# all values and keyscluster_data {'ids':
1,059 | all values and keyscluster_data {'ids': ['578c6d24-3763-11ee-a8ab-b7b7b34f99ba', '578c6d25-3763-11ee-a8ab-b7b7b34f99ba', 'fb2fc7d8-3762-11ee-a8ab-b7b7b34f99ba', 'fb2fc7d9-3762-11ee-a8ab-b7b7b34f99ba', '6b40881a-3762-11ee-a8ab-b7b7b34f99ba', '6b40881b-3762-11ee-a8ab-b7b7b34f99ba', '581e691e-3762-11ee-a8ab-b7b7b34f99ba', '581e691f-3762-11ee-a8ab-b7b7b34f99ba'], 'embeddings': None, 'metadatas': [{}, {}, {}, {}, {}, {}, {}, {}], 'documents': ['hello bagel', 'this is langchain', 'hello bagel', 'this is langchain', 'hello bagel', 'this is langchain', 'hello bagel', 'this is langchain']}cluster.delete_cluster()Create cluster with metadata & filter using metadata​texts = ["hello bagel", "this is langchain"]metadatas = [{"source": "notion"}, {"source": "google"}]cluster = Bagel.from_texts(cluster_name="testing", texts=texts, metadatas=metadatas)cluster.similarity_search_with_score("hello bagel", where={"source": "notion"}) [(Document(page_content='hello bagel', metadata={'source': 'notion'}), 0.0)]# delete the clustercluster.delete_cluster() | BagelDB (Open Vector Database for AI) is like GitHub for AI data. | BagelDB (Open Vector Database for AI) is like GitHub for AI data. ->: all values and keyscluster_data {'ids': ['578c6d24-3763-11ee-a8ab-b7b7b34f99ba', '578c6d25-3763-11ee-a8ab-b7b7b34f99ba', 'fb2fc7d8-3762-11ee-a8ab-b7b7b34f99ba', 'fb2fc7d9-3762-11ee-a8ab-b7b7b34f99ba', '6b40881a-3762-11ee-a8ab-b7b7b34f99ba', '6b40881b-3762-11ee-a8ab-b7b7b34f99ba', '581e691e-3762-11ee-a8ab-b7b7b34f99ba', '581e691f-3762-11ee-a8ab-b7b7b34f99ba'], 'embeddings': None, 'metadatas': [{}, {}, {}, {}, {}, {}, {}, {}], 'documents': ['hello bagel', 'this is langchain', 'hello bagel', 'this is langchain', 'hello bagel', 'this is langchain', 'hello bagel', 'this is langchain']}cluster.delete_cluster()Create cluster with metadata & filter using metadata​texts = ["hello bagel", "this is langchain"]metadatas = [{"source": "notion"}, {"source": "google"}]cluster = Bagel.from_texts(cluster_name="testing", texts=texts, metadatas=metadatas)cluster.similarity_search_with_score("hello bagel", where={"source": "notion"}) [(Document(page_content='hello bagel', metadata={'source': 'notion'}), 0.0)]# delete the clustercluster.delete_cluster()
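Since Bagel implements the shared LangChain VectorStore interface, a cluster should also plug into any chain that accepts a retriever; a small sketch, with the cluster name and k value purely illustrative:
from langchain.vectorstores import Bagel

cluster = Bagel.from_texts(cluster_name="testing", texts=["hello bagel", "hello langchain"])
# as_retriever() and get_relevant_documents() come from the VectorStore base class.
retriever = cluster.as_retriever(search_kwargs={"k": 1})
print(retriever.get_relevant_documents("bagel"))
cluster.delete_cluster()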
1,060 | Dingo | 🦜️🔗 Langchain | Dingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data. | Dingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data. ->: Dingo | 🦜️🔗 Langchain
1,061 | DingoDingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data.This notebook shows how to use functionality related to the DingoDB vector database.To run, you should have a DingoDB instance up and running.pip install dingodbor install latest:pip install git+https://git@github.com/dingodb/pydingo.gitWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:········from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Dingofrom langchain.document_loaders import | Dingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data. | Dingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data.
->: DingoDingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data.This notebook shows how to use functionality related to the DingoDB vector database.To run, you should have a DingoDB instance up and running.pip install dingodbor install latest:pip install git+https://git@github.com/dingodb/pydingo.gitWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:········from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Dingofrom langchain.document_loaders import
1,062 | Dingofrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()from dingodb import DingoDBindex_name = "langchain-demo"dingo_client = DingoDB(user="", password="", host=["127.0.0.1:13000"])# First, check if our index already exists. If it doesn't, we create itif index_name not in dingo_client.get_index(): # we create a new index, modify to your own dingo_client.create_index( index_name=index_name, dimension=1536, metric_type='cosine', auto_id=False)# The OpenAI embedding model `text-embedding-ada-002` uses 1536 dimensionsdocsearch = Dingo.from_documents(docs, embeddings, client=dingo_client, index_name=index_name)from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Dingofrom langchain.document_loaders import TextLoaderquery = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)Adding More Text to an Existing Index​More text can be embedded and upserted to an existing Dingo index using the add_texts functionvectorstore = Dingo(embeddings, "text", client=dingo_client, index_name=index_name)vectorstore.add_texts(["More text!"])Maximal Marginal Relevance Searches​In addition to using similarity search in the retriever object, you can also use mmr as the retriever.retriever = docsearch.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content)Or use max_marginal_relevance_search directly:found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in | Dingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data. | Dingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data. ->: Dingofrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()from dingodb import DingoDBindex_name = "langchain-demo"dingo_client = DingoDB(user="", password="", host=["127.0.0.1:13000"])# First, check if our index already exists.
If it doesn't, we create itif index_name not in dingo_client.get_index(): # we create a new index, modify to your own dingo_client.create_index( index_name=index_name, dimension=1536, metric_type='cosine', auto_id=False)# The OpenAI embedding model `text-embedding-ada-002` uses 1536 dimensionsdocsearch = Dingo.from_documents(docs, embeddings, client=dingo_client, index_name=index_name)from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Dingofrom langchain.document_loaders import TextLoaderquery = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)Adding More Text to an Existing Index​More text can be embedded and upserted to an existing Dingo index using the add_texts functionvectorstore = Dingo(embeddings, "text", client=dingo_client, index_name=index_name)vectorstore.add_texts(["More text!"])Maximal Marginal Relevance Searches​In addition to using similarity search in the retriever object, you can also use mmr as the retriever.retriever = docsearch.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content)Or use max_marginal_relevance_search directly:found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in
1,063 | k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n") | Dingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data. | Dingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data. ->: k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")
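To turn the retrieved chunks into fluent answers, the MMR retriever above can be dropped into a standard RetrievalQA chain; a minimal sketch that reuses the docsearch object and the OPENAI_API_KEY set earlier in this notebook (the temperature value is illustrative):
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# MMR re-ranks the fetched candidates for diversity before the LLM sees them.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=docsearch.as_retriever(search_type="mmr"),
)
print(qa.run("What did the president say about Ketanji Brown Jackson?"))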
1,064 | sqlite-vss | 🦜️🔗 Langchain | sqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities. | sqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities. ->: sqlite-vss | 🦜️🔗 Langchain
1,065 | sqlite-vsssqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities.This notebook shows how to use the SQLiteVSS vector database.# You need to install sqlite-vss as a dependency.%pip install sqlite-vssQuickstart​from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import SQLiteVSSfrom langchain.document_loaders import TextLoader# load the document and split it into chunksloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()# split it into chunkstext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)texts = [doc.page_content for doc in docs]# create the open-source embedding | sqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities. | sqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities.
->: sqlite-vsssqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities.This notebook shows how to use the SQLiteVSS vector database.# You need to install sqlite-vss as a dependency.%pip install sqlite-vssQuickstart​from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import SQLiteVSSfrom langchain.document_loaders import TextLoader# load the document and split it into chunksloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()# split it into chunkstext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)texts = [doc.page_content for doc in docs]# create the open-source embedding
1,066 | doc in docs]# create the open-source embedding functionembedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")# load it in sqlite-vss in a table named state_union.# the db_file parameter is the name of the file you want# as your sqlite database.db = SQLiteVSS.from_texts( texts=texts, embedding=embedding_function, table="state_union", db_file="/tmp/vss.db")# query itquery = "What did the president say about Ketanji Brown Jackson"data = db.similarity_search(query)# print resultsdata[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'Using existing sqlite connection​from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import SQLiteVSSfrom langchain.document_loaders import TextLoader# load the document and split it into chunksloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()# split it into chunkstext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)texts = [doc.page_content for doc in docs]# create the open-source embedding functionembedding_function = | sqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities. | sqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities. ->: doc in docs]# create the open-source embedding functionembedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")# load it in sqlite-vss in a table named state_union.# the db_file parameter is the name of the file you want# as your sqlite database.db = SQLiteVSS.from_texts( texts=texts, embedding=embedding_function, table="state_union", db_file="/tmp/vss.db")# query itquery = "What did the president say about Ketanji Brown Jackson"data = db.similarity_search(query)# print resultsdata[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. 
\n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'Using existing sqlite connection​from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import SQLiteVSSfrom langchain.document_loaders import TextLoader# load the document and split it into chunksloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()# split it into chunkstext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)texts = [doc.page_content for doc in docs]# create the open-source embedding functionembedding_function = |
1,067 | embedding functionembedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")connection = SQLiteVSS.create_connection(db_file="/tmp/vss.db")db1 = SQLiteVSS( table="state_union", embedding=embedding_function, connection=connection)db1.add_texts(["Ketanji Brown Jackson is awesome"])# query it againquery = "What did the president say about Ketanji Brown Jackson"data = db1.similarity_search(query)# print resultsdata[0].page_content 'Ketanji Brown Jackson is awesome'# Cleaning upimport osos.remove("/tmp/vss.db") | sqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities. | sqlite-vss is an SQLite extension designed for vector search, emphasizing local-first operations and easy integration into applications without external servers. Leveraging the Faiss library, it offers efficient similarity search and clustering capabilities. ->: embedding functionembedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")connection = SQLiteVSS.create_connection(db_file="/tmp/vss.db")db1 = SQLiteVSS( table="state_union", embedding=embedding_function, connection=connection)db1.add_texts(["Ketanji Brown Jackson is awesome"])# query it againquery = "What did the president say about Ketanji Brown Jackson"data = db1.similarity_search(query)# print resultsdata[0].page_content 'Ketanji Brown Jackson is awesome'# Cleaning upimport osos.remove("/tmp/vss.db")
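Because the index lives in an ordinary SQLite file, the store written in the quickstart can be reopened in a later session and used as a retriever; a small sketch using only the calls shown above plus the base-class as_retriever() (the k value is illustrative):
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.vectorstores import SQLiteVSS

# Reconnect to the database file created in the quickstart.
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
connection = SQLiteVSS.create_connection(db_file="/tmp/vss.db")
db = SQLiteVSS(table="state_union", embedding=embedding_function, connection=connection)

# as_retriever() is inherited from the shared VectorStore base class.
retriever = db.as_retriever(search_kwargs={"k": 2})
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
print(docs[0].page_content)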
1,068 | Vectara | 🦜️🔗 Langchain | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: Vectara | 🦜️🔗 Langchain
1,069 | VectaraVectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.
See the Vectara API documentation for more information on how to use the API.This notebook shows how to use functionality related to Vectara's integration with langchain. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: VectaraVectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.
See the Vectara API documentation for more information on how to use the API.This notebook shows how to use functionality related to Vectara's integration with langchain.
1,070 | Note that unlike many other integrations in this category, Vectara provides an end-to-end managed service for Grounded Generation (aka retrieval augmented generation), which includes:A way to extract text from document files and chunk them into sentences.Its own embeddings model and vector store - each text segment is encoded into a vector embedding and stored in the Vectara internal vector storeA query service that automatically encodes the query into an embedding, and retrieves the most relevant text segments (including support for Hybrid Search)All of these are supported in this LangChain integration.SetupYou will need a Vectara account to use Vectara with LangChain. To get started, use the following steps (see our quickstart guide):Sign up for a Vectara account if you don't already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the "Create Corpus" button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.Next you'll need to create API keys to access the corpus. Click on the "Authorization" tab in the corpus view and then the "Create API Key" button. Give your key a name, and choose whether you want query only or query+index for your key. Click "Create" and you now have an active API key. Keep this key confidential. To use LangChain with Vectara, you'll need to have these three values: customer ID, corpus ID and api_key. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: Note that unlike many other integrations in this category, Vectara provides an end-to-end managed service for Grounded Generation (aka retrieval augmented generation), which includes:A way to extract text from document files and chunk them into sentences.Its own embeddings model and vector store - each text segment is encoded into a vector embedding and stored in the Vectara internal vector storeA query service that automatically encodes the query into an embedding, and retrieves the most relevant text segments (including support for Hybrid Search)All of these are supported in this LangChain integration.SetupYou will need a Vectara account to use Vectara with LangChain. To get started, use the following steps (see our quickstart guide):Sign up for a Vectara account if you don't already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the "Create Corpus" button. You then provide a name to your corpus as well as a description.
Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.Next you'll need to create API keys to access the corpus. Click on the "Authorization" tab in the corpus view and then the "Create API Key" button. Give your key a name, and choose whether you want query only or query+index for your key. Click "Create" and you now have an active API key. Keep this key confidential. To use LangChain with Vectara, you'll need to have these three values: customer ID, corpus ID and api_key. |
1,071 | You can provide those to LangChain in two ways:Include in your environment these three variables: VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY.For example, you can set these variables using os.environ and getpass as follows:import osimport getpassos.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")Provide them as arguments when creating the Vectara vectorstore object:vectorstore = Vectara( vectara_customer_id=vectara_customer_id, vectara_corpus_id=vectara_corpus_id, vectara_api_key=vectara_api_key )Connecting to Vectara from LangChain​In this example, we assume that you've created an account and a corpus, and added your VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY (created with permissions for both indexing and query) as environment variables.The corpus has 3 fields defined as metadata for filtering:url: a string field containing the source URL of the document (where relevant)speech: a string field containing the name of the speechauthor: the name of the authorLet's start by ingesting 3 documents into the corpus:The State of the Union speech from 2022, available in the LangChain repository as a text fileThe "I have a dream" speech by Dr. KingThe "We shall Fight on the Beaches" speech by Winston Churchillfrom langchain.embeddings import FakeEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Vectarafrom langchain.document_loaders import TextLoaderfrom langchain.llms import OpenAIfrom langchain.chains import ConversationalRetrievalChainfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfoloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.
->: You can provide those to LangChain in two ways:Include in your environment these three variables: VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY.For example, you can set these variables using os.environ and getpass as follows:import osimport getpassos.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")Provide them as arguments when creating the Vectara vectorstore object:vectorstore = Vectara( vectara_customer_id=vectara_customer_id, vectara_corpus_id=vectara_corpus_id, vectara_api_key=vectara_api_key )Connecting to Vectara from LangChain​In this example, we assume that you've created an account and a corpus, and added your VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY (created with permissions for both indexing and query) as environment variables.The corpus has 3 fields defined as metadata for filtering:url: a string field containing the source URL of the document (where relevant)speech: a string field containing the name of the speechauthor: the name of the authorLet's start by ingesting 3 documents into the corpus:The State of the Union speech from 2022, available in the LangChain repository as a text fileThe "I have a dream" speech by Dr. KingThe "We shall Fight on the Beaches" speech by Winston Churchillfrom langchain.embeddings import FakeEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Vectarafrom langchain.document_loaders import TextLoaderfrom langchain.llms import OpenAIfrom langchain.chains import ConversationalRetrievalChainfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfoloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter =
1,072 | = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)vectara = Vectara.from_documents( docs, embedding=FakeEmbeddings(size=768), doc_metadata={"speech": "state-of-the-union", "author": "Biden"},)Vectara's indexing API provides a file upload API where the file is handled directly by Vectara - pre-processed, chunked optimally and added to the Vectara vector store. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)vectara = Vectara.from_documents( docs, embedding=FakeEmbeddings(size=768), doc_metadata={"speech": "state-of-the-union", "author": "Biden"},)Vectara's indexing API provides a file upload API where the file is handled directly by Vectara - pre-processed, chunked optimally and added to the Vectara vector store.
1,073 | To use this, we added the add_files() method (as well as from_files()). Let's see this in action. We pick two PDF documents to upload: The "I have a dream" speech by Dr. KingChurchill's "We Shall Fight on the Beaches" speechimport tempfileimport urllib.requesturls = [ [ "https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf", "I-have-a-dream", "Dr. King" ], [ "https://www.parkwayschools.net/cms/lib/MO01931486/Centricity/Domain/1578/Churchill_Beaches_Speech.pdf", "we shall fight on the beaches", "Churchill" ],]files_list = []for url, _, _ in urls: name = tempfile.NamedTemporaryFile().name urllib.request.urlretrieve(url, name) files_list.append(name)docsearch: Vectara = Vectara.from_files( files=files_list, embedding=FakeEmbeddings(size=768), metadatas=[{"url": url, "speech": title, "author": author} for url, title, author in urls],)Similarity search​The simplest scenario for using Vectara is to perform a similarity search. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: To use this, we added the add_files() method (as well as from_files()). Let's see this in action. We pick two PDF documents to upload: The "I have a dream" speech by Dr. KingChurchill's "We Shall Fight on the Beaches" speechimport tempfileimport urllib.requesturls = [ [ "https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf", "I-have-a-dream", "Dr. King" ], [ "https://www.parkwayschools.net/cms/lib/MO01931486/Centricity/Domain/1578/Churchill_Beaches_Speech.pdf", "we shall fight on the beaches", "Churchill" ],]files_list = []for url, _, _ in urls: name = tempfile.NamedTemporaryFile().name urllib.request.urlretrieve(url, name) files_list.append(name)docsearch: Vectara = Vectara.from_files( files=files_list, embedding=FakeEmbeddings(size=768), metadatas=[{"url": url, "speech": title, "author": author} for url, title, author in urls],)Similarity search​The simplest scenario for using Vectara is to perform a similarity search.
query = "What did the president say about Ketanji Brown Jackson"found_docs = vectara.similarity_search( query, n_sentence_context=0, filter="doc.speech = 'state-of-the-union'")print(found_docs[0].page_content) And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson.Similarity search with score‚ÄãSometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result.query = "What did the president say about Ketanji Brown Jackson"found_docs = vectara.similarity_search_with_score( query, filter="doc.speech = 'state-of-the-union'", score_threshold=0.2,)document, score = found_docs[0]print(document.page_content)print(f"\nScore: {score}") Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days |
1,074 | States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice. 
Score: 0.8299499Now let's do a similar search for content in the files we uploadedquery = "We must forever conduct our struggle"min_score = 1.2found_docs = vectara.similarity_search_with_score( query, filter="doc.speech = 'I-have-a-dream'", score_threshold=min_score,)print(f"With this threshold of {min_score} we have {len(found_docs)} documents") With this threshold of 1.2 we have 0 documentsquery = "We must forever conduct our struggle"min_score = 0.2found_docs = vectara.similarity_search_with_score( query, filter="doc.speech = 'I-have-a-dream'", score_threshold=min_score,)print(f"With this threshold of {min_score} we have {len(found_docs)} documents") With this threshold of 0.2 we have 5 documentsVectara as a RetrieverVectara, like all other LangChain vectorstores, is most often used as a LangChain Retriever:retriever = vectara.as_retriever()retriever VectaraRetriever(tags=['Vectara'], metadata=None, vectorstore=<langchain.vectorstores.vectara.Vectara object at 0x13b15e9b0>, search_type='similarity', search_kwargs={'lambda_val': 0.025, 'k': 5, 'filter': '', 'n_sentence_context': '2'})query = "What did the president say about Ketanji Brown Jackson"retriever.get_relevant_documents(query)[0] Document(page_content='Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice.', metadata={'source': 'langchain', 'lang': 'eng', | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice.
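The default search_kwargs shown in the retriever's repr above can be overridden when the retriever is created. A small sketch, assuming the vectara store from this page:

# Narrow the retriever to two results from one speech; these keys mirror
# the defaults visible in the VectaraRetriever repr above.
retriever = vectara.as_retriever(
    search_kwargs={"k": 2, "filter": "doc.speech = 'state-of-the-union'"}
)
docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)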
1,075 | metadata={'source': 'langchain', 'lang': 'eng', 'offset': '596', 'len': '97', 'speech': 'state-of-the-union', 'author': 'Biden'})Using Vectara as a SelfQuery Retrievermetadata_field_info = [ AttributeInfo( name="speech", description="the name of the speech", type="string or list[string]", ), AttributeInfo( name="author", description="the author of the speech", type="string or list[string]", ),]document_content_description = "the text of the speech"vectordb = Vectara()llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm(llm, vectara, document_content_description, metadata_field_info, verbose=True)retriever.get_relevant_documents("what did Biden say about the freedom?") /Users/ofer/dev/langchain/libs/langchain/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. warnings.warn( query='freedom' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='author', value='Biden') limit=None [Document(page_content='Well I know this nation. We will meet the test. To protect freedom and liberty, to expand fairness and opportunity. We will save democracy. 
As hard as these times have been, I am more optimistic about America today than I have been my whole life.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '346', 'len': '67', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='To our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. Putin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people. He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced.', | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: metadata={'source': 'langchain', 'lang': 'eng', 'offset': '596', 'len': '97', 'speech': 'state-of-the-union', 'author': 'Biden'})Using Vectara as a SelfQuery Retrievermetadata_field_info = [ AttributeInfo( name="speech", description="the name of the speech", type="string or list[string]", ), AttributeInfo( name="author", description="the author of the speech", type="string or list[string]", ),]document_content_description = "the text of the speech"vectordb = Vectara()llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm(llm, vectara, document_content_description, metadata_field_info, verbose=True)retriever.get_relevant_documents("what did Biden say about the freedom?") /Users/ofer/dev/langchain/libs/langchain/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. warnings.warn( query='freedom' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='author', value='Biden') limit=None [Document(page_content='Well I know this nation. We will meet the test. To protect freedom and liberty, to expand fairness and opportunity. We will save democracy.
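Any of these retrievers can be dropped into a chain. A brief sketch, assuming the SelfQueryRetriever built above and the standard 2023-era LangChain chain API:

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Wrap the self-querying retriever in a question-answering chain; the LLM
# both writes the metadata filter and composes the final answer.
qa = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), retriever=retriever)
qa.run("What did Biden say about freedom?")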
1,076 | the hardest years this nation has ever faced.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '740', 'len': '47', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '413', 'len': '77', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='We can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. We have fought for freedom, expanded liberty, defeated totalitarianism and terror. And built the strongest, freest, and most prosperous nation the world has ever known. Now is the hour. \n\nOur moment of responsibility.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '906', 'len': '82', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '0', 'len': '63', 'speech': 'state-of-the-union', 'author': 'Biden'})]retriever.get_relevant_documents("what did Dr. King say about the freedom?") query='freedom' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='author', | Vectara is a API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is a API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: the hardest years this nation has ever faced.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '740', 'len': '47', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '413', 'len': '77', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='We can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. We have fought for freedom, expanded liberty, defeated totalitarianism and terror. And built the strongest, freest, and most prosperous nation the world has ever known. Now is the hour. 
\n\nOur moment of responsibility.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '906', 'len': '82', 'speech': 'state-of-the-union', 'author': 'Biden'}), Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '0', 'len': '63', 'speech': 'state-of-the-union', 'author': 'Biden'})]retriever.get_relevant_documents("what did Dr. King say about the freedom?") query='freedom' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='author', |
1,077 | 'eq'>, attribute='author', value='Dr. King') limit=None [Document(page_content='And if America is to be a great nation, this must become true. So\nlet freedom ring from the prodigious hilltops of New Hampshire. Let freedom ring from the mighty\nmountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. Let\nfreedom ring from the snowcapped Rockies of Colorado.', metadata={'lang': 'eng', 'section': '3', 'offset': '1534', 'len': '55', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='And if America is to be a great nation, this must become true. So\nlet freedom ring from the prodigious hilltops of New Hampshire. Let freedom ring from the mighty\nmountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. Let\nfreedom ring from the snowcapped Rockies of Colorado.', metadata={'lang': 'eng', 'section': '3', 'offset': '1534', 'len': '55', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='Let freedom ring from the curvaceous slopes of\nCalifornia. But not only that. Let freedom ring from Stone Mountain of Georgia. Let freedom ring from Lookout\nMountain of Tennessee. Let freedom ring from every hill and molehill of Mississippi, from every\nmountain side. Let freedom | Vectara is a API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is a API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: 'eq'>, attribute='author', value='Dr. King') limit=None [Document(page_content='And if America is to be a great nation, this must become true. So\nlet freedom ring from the prodigious hilltops of New Hampshire. Let freedom ring from the mighty\nmountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. Let\nfreedom ring from the snowcapped Rockies of Colorado.', metadata={'lang': 'eng', 'section': '3', 'offset': '1534', 'len': '55', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='And if America is to be a great nation, this must become true. So\nlet freedom ring from the prodigious hilltops of New Hampshire. Let freedom ring from the mighty\nmountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. 
Let\nfreedom ring from the snowcapped Rockies of Colorado.', metadata={'lang': 'eng', 'section': '3', 'offset': '1534', 'len': '55', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='Let freedom ring from the curvaceous slopes of\nCalifornia. But not only that. Let freedom ring from Stone Mountain of Georgia. Let freedom ring from Lookout\nMountain of Tennessee. Let freedom ring from every hill and molehill of Mississippi, from every\nmountain side. Let freedom |
1,078 | from every\nmountain side. Let freedom ring . . .\nWhen we allow freedom to ring—when we let it ring from every city and every hamlet, from every state\nand every city, we will be able to speed up that day when all of God’s children, black men and white\nmen, Jews and Gentiles, Protestants and Catholics, will be able to join hands and sing in the words of the\nold Negro spiritual, “Free at last, Free at last, Great God a-mighty, We are free at last.”', metadata={'lang': 'eng', 'section': '3', 'offset': '1842', 'len': '52', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='Let freedom ring from the curvaceous slopes of\nCalifornia. But not only that. Let freedom ring from Stone Mountain of Georgia. Let freedom ring from Lookout\nMountain of Tennessee. Let freedom ring from every hill and molehill of Mississippi, from every\nmountain side. Let freedom ring . . .\nWhen we allow freedom to ring—when we let it ring from every city and every hamlet, from every state\nand every city, we will be able to speed up that day when all of God’s children, black men and white\nmen, Jews and Gentiles, Protestants and Catholics, will be able to join hands and sing in the words of the\nold Negro spiritual, “Free at last, Free at last, Great God a-mighty, We are free at last.”', metadata={'lang': 'eng', 'section': '3', 'offset': '1842', 'len': '52', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': | Vectara is a API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is a API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: from every\nmountain side. Let freedom ring . . .\nWhen we allow freedom to ring—when we let it ring from every city and every hamlet, from every state\nand every city, we will be able to speed up that day when all of God’s children, black men and white\nmen, Jews and Gentiles, Protestants and Catholics, will be able to join hands and sing in the words of the\nold Negro spiritual, “Free at last, Free at last, Great God a-mighty, We are free at last.”', metadata={'lang': 'eng', 'section': '3', 'offset': '1842', 'len': '52', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='Let freedom ring from the curvaceous slopes of\nCalifornia. But not only that. Let freedom ring from Stone Mountain of Georgia. Let freedom ring from Lookout\nMountain of Tennessee. 
Let freedom ring from every hill and molehill of Mississippi, from every\nmountain side. Let freedom ring . . .\nWhen we allow freedom to ring—when we let it ring from every city and every hamlet, from every state\nand every city, we will be able to speed up that day when all of God’s children, black men and white\nmen, Jews and Gentiles, Protestants and Catholics, will be able to join hands and sing in the words of the\nold Negro spiritual, “Free at last, Free at last, Great God a-mighty, We are free at last.”', metadata={'lang': 'eng', 'section': '3', 'offset': '1842', 'len': '52', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': |
1,079 | 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='Let freedom ring from the mighty\nmountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. Let\nfreedom ring from the snowcapped Rockies of Colorado. Let freedom ring from the curvaceous slopes of\nCalifornia. But not only that. Let freedom ring from Stone Mountain of Georgia.', metadata={'lang': 'eng', 'section': '3', 'offset': '1657', 'len': '57', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'})] | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. | Vectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy. ->: 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'}), Document(page_content='Let freedom ring from the mighty\nmountains of New York. Let freedom ring from the heightening Alleghenies of Pennsylvania. Let\nfreedom ring from the snowcapped Rockies of Colorado. Let freedom ring from the curvaceous slopes of\nCalifornia. But not only that. Let freedom ring from Stone Mountain of Georgia.', metadata={'lang': 'eng', 'section': '3', 'offset': '1657', 'len': '57', 'CreationDate': '1424880481', 'Producer': 'Adobe PDF Library 10.0', 'Author': 'Sasha Rolon-Pereira', 'Title': 'Martin Luther King Jr.pdf', 'Creator': 'Acrobat PDFMaker 10.1 for Word', 'ModDate': '1424880524', 'url': 'https://www.gilderlehrman.org/sites/default/files/inline-pdfs/king.dreamspeech.excerpts.pdf', 'speech': 'I-have-a-dream', 'author': 'Dr. King', 'title': 'Martin Luther King Jr.pdf'})] |
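Before leaving the Vectara walkthrough, note that the filter syntax used above applies equally to the documents ingested via from_files(). A short sketch against the docsearch store built earlier on this page:

# Restrict the search to the Churchill speech uploaded with from_files();
# the filter references the doc-level metadata attached at upload time.
found_docs = docsearch.similarity_search(
    "fight on the beaches",
    filter="doc.speech = 'we shall fight on the beaches'",
)
print(found_docs[0].page_content)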
1,080 | LanceDB | 🦜️🔗 Langchain | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering and management of embeddings. Fully open source. | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering and management of embeddings. Fully open source. ->: LanceDB | 🦜️🔗 Langchain |
1,081 | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering and management of embeddings. 
Fully open source.This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.pip install lancedbWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········from langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import LanceDBfrom langchain.document_loaders import TextLoaderfrom langchain.text_splitter import CharacterTextSplitterloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()documents = CharacterTextSplitter().split_documents(documents)embeddings = OpenAIEmbeddings()import lancedbdb = lancedb.connect("/tmp/lancedb")table = db.create_table( "my_table", data=[ { | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering and management of embeddings. Fully open source. | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering and management of embeddings. Fully open source. ->: LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering and management of embeddings. 
1,082 | "my_table", data=[ { "vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1", } ], mode="overwrite",)docsearch = LanceDB.from_documents(documents, embeddings, connection=table)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content) They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. That’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope. We should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities. I ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe. | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrevial, filtering and management of embeddings. Fully open source. | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrevial, filtering and management of embeddings. Fully open source. ->: "my_table", data=[ { "vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1", } ], mode="overwrite",)docsearch = LanceDB.from_documents(documents, embeddings, connection=table)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content) They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. 
That’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope. We should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities. I ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe. |
1,083 | budget and keep our neighborhoods safe. And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced. And I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon? Ban assault weapons and high-capacity magazines. Repeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued. These laws don’t infringe on the Second Amendment. They save lives. The most fundamental right in America is the right to vote – and to have it counted. And it’s under assault. In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrevial, filtering and management of embeddings. Fully open source. | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrevial, filtering and management of embeddings. Fully open source. ->: budget and keep our neighborhoods safe. And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced. And I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon? Ban assault weapons and high-capacity magazines. Repeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued. These laws don’t infringe on the Second Amendment. They save lives. The most fundamental right in America is the right to vote – and to have it counted. And it’s under assault. In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. 
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of |
1,084 | been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering and management of embeddings. Fully open source. | LanceDB is an open-source database for vector-search built with persistent storage, which greatly simplifies retrieval, filtering and management of embeddings. Fully open source. ->: been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. |
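Because LanceDB persists to disk, the table created above can be reopened in a later session without re-indexing. A minimal sketch, assuming the /tmp/lancedb directory and "my_table" from this page (constructor argument names may vary slightly across LangChain versions):

import lancedb
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import LanceDB

# Reconnect to the persisted database and wrap the existing table.
db = lancedb.connect("/tmp/lancedb")
table = db.open_table("my_table")
docsearch = LanceDB(connection=table, embedding=OpenAIEmbeddings())
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson")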
1,085 | Cassandra | 🦜️🔗 Langchain | Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database. | Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database. ->: Cassandra | 🦜️🔗 Langchain |
1,086 | Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database.Newest Cassandra releases natively support Vector Similarity Search.To run this notebook you need either a running Cassandra cluster equipped with Vector Search capabilities (in pre-release at the time of writing) or a DataStax Astra DB instance running in the cloud (you can get one for free at datastax.com). Check cassio.org for more information.pip install "cassio>=0.1.0"Please provide database connection parameters and secrets:import osimport getpassdatabase_mode = (input("\n(C)assandra or (A)stra DB? ")).upper()keyspace_name = input("\nKeyspace name? ")if database_mode == "A": ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ') # ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ")elif database_mode == "C": CASSANDRA_CONTACT_POINTS = input( "Contact points? (comma-separated, empty for localhost) " | Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database. | Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database. 
->: Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database.Newest Cassandra releases natively support Vector Similarity Search.To run this notebook you need either a running Cassandra cluster equipped with Vector Search capabilities (in pre-release at the time of writing) or a DataStax Astra DB instance running in the cloud (you can get one for free at datastax.com). Check cassio.org for more information.pip install "cassio>=0.1.0"Please provide database connection parameters and secrets:import osimport getpassdatabase_mode = (input("\n(C)assandra or (A)stra DB? ")).upper()keyspace_name = input("\nKeyspace name? ")if database_mode == "A": ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ') # ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ")elif database_mode == "C": CASSANDRA_CONTACT_POINTS = input( "Contact points? (comma-separated, empty for localhost) " |
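One caveat before the connection code in the next chunk: the Astra DB branch there reads ASTRA_DB_SECURE_BUNDLE_PATH, which the cell above only gathers as a commented-out line. If you chose Astra DB, uncomment it or set the variable yourself; a one-line sketch that mirrors the commented original:

# Required by the Astra DB branch of the Session-building code below.
if database_mode == "A":
    ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ")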
1,087 | (comma-separated, empty for localhost) " ).strip()Depending on whether you are connecting to a local Cassandra cluster or a cloud-based Astra DB instance, create the corresponding database connection "Session" objectfrom cassandra.cluster import Clusterfrom cassandra.auth import PlainTextAuthProviderif database_mode == "C": if CASSANDRA_CONTACT_POINTS: cluster = Cluster( [cp.strip() for cp in CASSANDRA_CONTACT_POINTS.split(",") if cp.strip()] ) else: cluster = Cluster() session = cluster.connect()elif database_mode == "A": ASTRA_DB_CLIENT_ID = "token" cluster = Cluster( cloud={ "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH, }, auth_provider=PlainTextAuthProvider( ASTRA_DB_CLIENT_ID, ASTRA_DB_APPLICATION_TOKEN, ), ) session = cluster.connect()else: raise NotImplementedErrorPlease provide OpenAI access keyWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Creation and usage of the Vector Storefrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Cassandrafrom langchain.document_loaders import TextLoaderSOURCE_FILE_NAME = "../../modules/state_of_the_union.txt"loader = TextLoader(SOURCE_FILE_NAME)documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embedding_function = OpenAIEmbeddings()table_name = "my_vector_db_table"docsearch = Cassandra.from_documents( documents=docs, embedding=embedding_function, session=session, keyspace=keyspace_name, table_name=table_name,)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)## if you already have an index, you can load it and use it like this:# docsearch_preexisting = | Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database. | Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database. 
->: (comma-separated, empty for localhost) " ).strip()Depending on whether you are connecting to a local Cassandra cluster or a cloud-based Astra DB instance, create the corresponding database connection "Session" objectfrom cassandra.cluster import Clusterfrom cassandra.auth import PlainTextAuthProviderif database_mode == "C": if CASSANDRA_CONTACT_POINTS: cluster = Cluster( [cp.strip() for cp in CASSANDRA_CONTACT_POINTS.split(",") if cp.strip()] ) else: cluster = Cluster() session = cluster.connect()elif database_mode == "A": ASTRA_DB_CLIENT_ID = "token" cluster = Cluster( cloud={ "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH, }, auth_provider=PlainTextAuthProvider( ASTRA_DB_CLIENT_ID, ASTRA_DB_APPLICATION_TOKEN, ), ) session = cluster.connect()else: raise NotImplementedErrorPlease provide OpenAI access keyWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Creation and usage of the Vector Storefrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Cassandrafrom langchain.document_loaders import TextLoaderSOURCE_FILE_NAME = "../../modules/state_of_the_union.txt"loader = TextLoader(SOURCE_FILE_NAME)documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embedding_function = OpenAIEmbeddings()table_name = "my_vector_db_table"docsearch = Cassandra.from_documents( documents=docs, embedding=embedding_function, session=session, keyspace=keyspace_name, table_name=table_name,)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)## if you already have an index, you can load it and use it like this:# docsearch_preexisting = |
1,088 | it and use it like this:# docsearch_preexisting = Cassandra(# embedding=embedding_function,# session=session,# keyspace=keyspace_name,# table_name=table_name,# )# docs = docsearch_preexisting.similarity_search(query, k=2)print(docs[0].page_content)Maximal Marginal Relevance SearchesIn addition to using similarity search in the retriever object, you can also use mmr as the retriever's search type.retriever = docsearch.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content)Or use max_marginal_relevance_search directly:found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")Metadata filteringYou can specify filtering on metadata when running searches in the vector store. 
By default, when inserting documents, the only metadata is the "source" (but you can customize the metadata at insertion time).Since only one file was inserted, this is just a demonstration of how filters are passed:filter = {"source": SOURCE_FILE_NAME}filtered_docs = docsearch.similarity_search(query, filter=filter, k=5)print(f"{len(filtered_docs)} documents retrieved.")print(f"{filtered_docs[0].page_content[:64]} ...")filter = {"source": "nonexisting_file.txt"}filtered_docs2 = docsearch.similarity_search(query, filter=filter)print(f"{len(filtered_docs2)} documents retrieved.")Please visit the cassIO documentation for more on using vector stores with Langchain. |
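The same table remains usable for incremental writes after the initial load. A short sketch using the standard vectorstore add_texts() API with the docsearch object from above:

# Insert additional texts (with their own "source" metadata) into the
# existing Cassandra-backed vector table; embeddings are computed on insert.
docsearch.add_texts(
    ["Apache Cassandra was originally developed at Facebook."],
    metadatas=[{"source": "cassandra_history.txt"}],
)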
1,089 | Timescale Vector (Postgres) | 🦜️🔗 Langchain | This notebook shows how to use the Postgres vector database Timescale Vector. You'll learn how to use TimescaleVector for (1) semantic search, (2) time-based vector search, (3) self-querying, and (4) how to create indexes to speed up queries. | This notebook shows how to use the Postgres vector database Timescale Vector. You'll learn how to use TimescaleVector for (1) semantic search, (2) time-based vector search, (3) self-querying, and (4) how to create indexes to speed up queries. ->: Timescale Vector (Postgres) | 🦜️🔗 Langchain |
1,090 | This notebook shows how to use the Postgres vector database Timescale Vector. You'll learn how to use TimescaleVector for (1) semantic search, (2) time-based vector search, (3) self-querying, and (4) how to create indexes to speed up queries.What is Timescale Vector?Timescale Vector is PostgreSQL++ for AI applications.Timescale Vector enables you to efficiently store and query millions of vector embeddings in PostgreSQL.Enhances pgvector with faster and more accurate similarity search on 100M+ vectors via a DiskANN-inspired indexing algorithm.Enables fast time-based vector search via automatic time-based partitioning and indexing.Provides a familiar SQL interface for querying vector embeddings and relational data.Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.Benefits | This notebook shows how to use the Postgres vector database Timescale Vector. You'll learn how to use TimescaleVector for (1) semantic search, (2) time-based vector search, (3) self-querying, and (4) how to create indexes to speed up queries. | This notebook shows how to use the Postgres vector database Timescale Vector. You'll learn how to use TimescaleVector for (1) semantic search, (2) time-based vector search, (3) self-querying, and (4) how to create indexes to speed up queries. 
->: This notebook shows how to use the Postgres vector database Timescale Vector. You'll learn how to use TimescaleVector for (1) semantic search, (2) time-based vector search, (3) self-querying, and (4) how to create indexes to speed up queries.What is Timescale Vector?Timescale Vector is PostgreSQL++ for AI applications.Timescale Vector enables you to efficiently store and query millions of vector embeddings in PostgreSQL.Enhances pgvector with faster and more accurate similarity search on 100M+ vectors via a DiskANN-inspired indexing algorithm.Enables fast time-based vector search via automatic time-based partitioning and indexing.Provides a familiar SQL interface for querying vector embeddings and relational data.Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.Benefits |
- Benefits from a rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security.
- Enables a worry-free experience with enterprise-grade security and compliance.

How to access Timescale Vector

Timescale Vector is available on Timescale, the cloud PostgreSQL platform. (There is no self-hosted version at this time.)

LangChain users get a 90-day free trial for Timescale Vector. To get started, sign up for Timescale, create a new database, and follow this notebook!

See the Timescale Vector explainer blog for more details and performance benchmarks, and the installation instructions for more details on using Timescale Vector in Python.

Setup

Follow these steps to get ready to follow this tutorial.

# Pip install necessary packages
pip install timescale-vector
pip install openai
pip install tiktoken

In this example, we'll use OpenAIEmbeddings, so let's load your OpenAI API key.

import os

# Run export OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY...
# Get the OpenAI API key by reading the local .env file
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']

# Alternatively, get the API key interactively and save it as an environment variable
# import os
# import getpass
# os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

from typing import List, Tuple

Next we'll import the needed Python libraries and libraries from LangChain. Note that we import the timescale-vector library as well as the TimescaleVector LangChain vectorstore.

import timescale_vector
from datetime import datetime, timedelta

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import TextLoader
from langchain.document_loaders.json_loader import JSONLoader
from langchain.docstore.document import Document
from langchain.vectorstores.timescalevector import TimescaleVector
1. Similarity Search with Euclidean Distance (Default)

First, we'll look at an example of doing a similarity search query on the State of the Union speech to find the most similar sentences to a given query sentence. We'll use Euclidean distance as our similarity metric.

# Load the text and split it into chunks
loader = TextLoader("../../../extras/modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

Next, we'll load the service URL for our Timescale database. If you haven't already, sign up for Timescale and create a new database.

Then, to connect to your PostgreSQL database, you'll need your service URI, which can be found in the cheatsheet or .env file you downloaded after creating a new database. The URI will look something like this: postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require.

# Timescale Vector needs the service URL to your cloud database. You can see this as soon as you create
# the service in the cloud UI or in your credentials.sql file.
SERVICE_URL = os.environ['TIMESCALE_SERVICE_URL']

# Specify directly if testing
# SERVICE_URL = "postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require"

# You can also get it from an environment variable. We suggest using a .env file.
# import os
# SERVICE_URL = os.environ.get("TIMESCALE_SERVICE_URL", "")

Next we create a TimescaleVector vectorstore. We specify a collection name, which will be the name of the table our data is stored in.

Note: When creating a new instance of TimescaleVector, the TimescaleVector module will try to create a table with the name of the collection. So make sure that the collection name is unique (i.e., it doesn't already exist).
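If you plan to re-run this notebook, one easy way to satisfy the uniqueness requirement is to append a random suffix to the collection name. The helper below is purely a convenience of ours, not part of the TimescaleVector API:

import uuid

# Hypothetical helper: derive a fresh table name on every run so repeated
# executions never collide with an existing collection.
def unique_collection_name(base: str) -> str:
    return f"{base}_{uuid.uuid4().hex[:8]}"

print(unique_collection_name("state_of_the_union"))  # e.g. state_of_the_union_3f2a9c1d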
# The TimescaleVector module will create a table with the name of the collection.
COLLECTION_NAME = "state_of_the_union_test"

# Create a Timescale Vector instance from the collection of documents
db = TimescaleVector.from_documents(
    embedding=embeddings,
    documents=docs,
    collection_name=COLLECTION_NAME,
    service_url=SERVICE_URL,
)

Now that we've loaded our data, we can perform a similarity search.

query = "What did the president say about Ketanji Brown Jackson"
docs_with_score = db.similarity_search_with_score(query)

for doc, score in docs_with_score:
    print("-" * 80)
    print("Score: ", score)
    print(doc.page_content)
    print("-" * 80)

--------------------------------------------------------------------------------
Score:  0.18443380687035138
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score:  0.18452197313308139
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score:  0.21720781018594182
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.

And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.

We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.

We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.

We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score:  0.21724902288621384
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.

And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.

We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.

We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.

We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
--------------------------------------------------------------------------------

Using a Timescale Vector as a Retriever

After initializing a TimescaleVector store, you can use it as a retriever.

# Use TimescaleVector as a retriever
retriever = db.as_retriever()
print(retriever)

tags=['TimescaleVector', 'OpenAIEmbeddings'] metadata=None vectorstore=<langchain.vectorstores.timescalevector.TimescaleVector object at 0x10fc8d070> search_type='similarity' search_kwargs={}
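As with other LangChain vectorstores, as_retriever() also accepts search parameters, for example to cap how many documents are returned. The snippet below follows the generic LangChain retriever interface and is a sketch rather than Timescale-specific behavior:

# Return only the top 2 most similar chunks instead of the default number
retriever_top2 = db.as_retriever(search_kwargs={"k": 2})
top_docs = retriever_top2.get_relevant_documents("What did the president say about Ketanji Brown Jackson?")
print(len(top_docs))  # 2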
Let's look at an example of using Timescale Vector as a retriever with the RetrievalQA chain and the stuff chain.

In this example, we'll ask the same query as above, but this time we'll pass the relevant documents returned from Timescale Vector to an LLM to use as context to answer our question.

First we'll create our stuff chain:

# Initialize GPT-3.5 model
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(temperature=0.1, model='gpt-3.5-turbo-16k')

# Initialize a RetrievalQA class from a stuff chain
from langchain.chains import RetrievalQA
qa_stuff = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    verbose=True,
)

query = "What did the president say about Ketanji Brown Jackson?"
response = qa_stuff.run(query)

> Entering new RetrievalQA chain...

> Finished chain.

print(response)

The President said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of our nation's top legal minds and will continue Justice Breyer's legacy of excellence. He also mentioned that since her nomination, she has received a broad range of support from various groups, including the Fraternal Order of Police and former judges appointed by Democrats and Republicans.

2. Similarity Search with time-based filtering

A key use case for Timescale Vector is efficient time-based vector search. Timescale Vector enables this by automatically partitioning vectors (and associated metadata) by time. This allows you to efficiently query vectors by both similarity to a query vector and time.

Time-based vector search functionality is helpful for applications like:

- Storing and retrieving LLM response history (e.g., chatbots)
- Finding the most recent embeddings that are similar to a query vector (e.g., recent news)
- Constraining similarity search to a relevant time range (e.g., asking time-based questions about a knowledge base)

To illustrate how to use TimescaleVector's time-based vector search functionality, we'll ask questions about the git log history for TimescaleDB. We'll illustrate how to add documents with a time-based uuid and how to run similarity searches with time range filters.

Extract content and metadata from git log JSON

First let's load the git log data into a new collection in our PostgreSQL database named timescale_commits.

import json
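Each git log record carries a commit hash, an author, a date, and a change summary and details. For orientation, this is the shape of one record, with values abbreviated from the sample output printed later in this section (the author email is redacted in this excerpt):

# Shape of one commit record in ts_git_log.json (abbreviated)
sample_record = {
    "commit": "44e41c12ab25e36c202f58e068ced262eadc8d16",
    "author": "Lakshmi Narayanan Sreethar<[email protected]>",  # email redacted
    "date": "Tue Sep 5 21:03:21 2023 +0530",
    "change summary": "Fix segfault in set_integer_now_func",
    "change details": "...",
}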
We'll define a helper function to create a uuid for a document and its associated vector embedding based on its timestamp. We'll use this function to create a uuid for each git log entry.

Important note: If you are working with documents and want the current date and time associated with the vector for time-based search, you can skip this step. A uuid will be automatically generated when the documents are ingested by default.

from timescale_vector import client

# Function to take in a date string in the past and return a uuid v1
def create_uuid(date_string: str):
    if date_string is None:
        return None
    time_format = '%a %b %d %H:%M:%S %Y %z'
    datetime_obj = datetime.strptime(date_string, time_format)
    uuid = client.uuid_from_time(datetime_obj)
    return str(uuid)

Next, we'll define a metadata function to extract the relevant metadata from the JSON record. We'll pass this function to the JSONLoader. See the JSON document loader docs for more details.

# Helper function to split name and email given an author string consisting of Name Lastname <email>
def split_name(input_string: str) -> Tuple[str, str]:
    if input_string is None:
        return None, None
    start = input_string.find("<")
    end = input_string.find(">")
    name = input_string[:start].strip()
    email = input_string[start + 1:end].strip()
    return name, email
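As a quick sanity check, here is how the two helpers defined so far behave on a made-up author string and on the commit date from the sample record above:

# Hypothetical sanity check for split_name and create_uuid
name, email = split_name("Jane Doe <jane@example.com>")
print(name, email)  # Jane Doe jane@example.com

# A v1 UUID whose embedded timestamp encodes the commit date
print(create_uuid("Tue Sep 5 21:03:21 2023 +0530"))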
# Helper function to transform a date string into a timestamptz string
def create_date(input_string: str) -> str:
    if input_string is None:
        return None
    # Define a dictionary to map month abbreviations to their numerical equivalents
    month_dict = {
        "Jan": "01", "Feb": "02", "Mar": "03", "Apr": "04",
        "May": "05", "Jun": "06", "Jul": "07", "Aug": "08",
        "Sep": "09", "Oct": "10", "Nov": "11", "Dec": "12",
    }

    # Split the input string into its components
    components = input_string.split()

    # Extract relevant information
    day = components[2]
    month = month_dict[components[1]]
    year = components[4]
    time = components[3]

    # Parse the timezone offset (e.g. "+0530") into its sign, hours, and minutes
    timezone_offset = components[5]
    timezone_sign = timezone_offset[0]
    timezone_hours = int(timezone_offset[1:3])
    timezone_minutes = int(timezone_offset[3:5])

    # Create a formatted string for the timestamptz in PostgreSQL format
    timestamp_tz_str = f"{year}-{month}-{day} {time}{timezone_sign}{timezone_hours:02}{timezone_minutes:02}"
    return timestamp_tz_str

# Metadata extraction function to extract metadata from a JSON record
def extract_metadata(record: dict, metadata: dict) -> dict:
    record_name, record_email = split_name(record["author"])
    metadata["id"] = create_uuid(record["date"])
    metadata["date"] = create_date(record["date"])
    metadata["author_name"] = record_name
    metadata["author_email"] = record_email
    metadata["commit_hash"] = record["commit"]
    return metadata

Next, you'll need to download the sample dataset and place it in the same directory as this notebook. You can use the following command:

# Download the file using curl and save it as ts_git_log.json
# Note: Execute this command in your terminal, in the same directory as the notebook
curl -O https://s3.amazonaws.com/assets.timescale.com/ai/ts_git_log.json

Finally we can initialize the JSON loader to parse the JSON records. We also remove empty records for simplicity.

# Define the path to the JSON file relative to this notebook
# Change this to the path to your JSON file
FILE_PATH = "../../../../../ts_git_log.json"

# Load data from the JSON file and extract metadata
loader = JSONLoader(
    file_path=FILE_PATH,
    jq_schema='.commit_history[]',
    text_content=False,
    metadata_func=extract_metadata,
)
documents = loader.load()

# Remove documents with None dates
documents = [doc for doc in documents if doc.metadata["date"] is not None]

print(documents[0])
page_content='{"commit": "44e41c12ab25e36c202f58e068ced262eadc8d16", "author": "Lakshmi Narayanan Sreethar<[email protected]>", "date": "Tue Sep 5 21:03:21 2023 +0530", "change summary": "Fix segfault in set_integer_now_func", "change details": "When an invalid function oid is passed to set_integer_now_func, it finds out that the function oid is invalid but before throwing the error, it calls ReleaseSysCache on an invalid tuple causing a segfault. Fixed that by removing the invalid call to ReleaseSysCache. Fixes #6037 "}' metadata={'source': '/Users/avtharsewrathan/sideprojects2023/timescaleai/tsv-langchain/ts_git_log.json', 'seq_num': 1, 'id': '8b407680-4c01-11ee-96a6-b82284ddccc6', 'date': '2023-09-5 21:03:21+0530', 'author_name': 'Lakshmi Narayanan Sreethar', 'author_email': '[email protected]', 'commit_hash': '44e41c12ab25e36c202f58e068ced262eadc8d16'}

Load documents and metadata into TimescaleVector vectorstore

Now that we have prepared our documents, let's process them and load them, along with their vector embedding representations, into our TimescaleVector vectorstore.

Since this is a demo, we will only load the first 500 records. In practice, you can load as many records as you want.

NUM_RECORDS = 500
documents = documents[:NUM_RECORDS]

Then we use the CharacterTextSplitter to split the documents into smaller chunks if needed for easier embedding. Note that this splitting process retains the metadata for each document.

# Split the documents into chunks for embedding
text_splitter = CharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
)
docs = text_splitter.split_documents(documents)

Next we'll create a Timescale Vector instance from the collection of documents that we finished pre-processing.

First, we'll define a collection name, which will be the name of our table in the PostgreSQL database. We'll also define a time delta, which we pass to the time_partition_interval argument and which will be used as the interval for partitioning the data by time. Each partition will consist of data for the specified length of time. We'll use 7 days for this demo.
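This excerpt of the notebook ends here, but to sketch the next step: we pass the time-based uuids created earlier as document ids and set the partition interval when creating the vectorstore, after which similarity searches can be constrained to a time range. Treat the keyword arguments below (ids, time_partition_interval, start_date, end_date) as a sketch of the interface described above rather than verified reference code:

# Create the vectorstore with 7-day time partitions (sketch)
COLLECTION_NAME = "timescale_commits"
db = TimescaleVector.from_documents(
    embedding=embeddings,
    ids=[doc.metadata["id"] for doc in docs],  # the time-based uuids created above
    documents=docs,
    collection_name=COLLECTION_NAME,
    service_url=SERVICE_URL,
    time_partition_interval=timedelta(days=7),
)

# Similarity search constrained to a one-week time range (sketch)
start_dt = datetime(2023, 8, 1)
end_dt = start_dt + timedelta(days=7)
docs_with_score = db.similarity_search_with_score(
    "What's new with TimescaleDB functions?",
    start_date=start_dt,
    end_date=end_dt,
)
for doc, score in docs_with_score:
    print(doc.metadata["date"], score)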