Unnamed: 0
int64 0
4.66k
| page content
stringlengths 23
2k
| description
stringlengths 8
925
| output
stringlengths 38
2.93k
|
---|---|---|---|
800 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersGoogle Cloud Document AIGoogle Cloud Document AIDocument AI is a document understanding platform from Google Cloud to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume.Learn more:Document AI overviewDocument AI videos and labsTry it!The module contains a PDF parser based on DocAI from Google Cloud.You need to install two libraries to use this parser:First, you need to set up a Google Cloud Storage (GCS) bucket and create your own Optical Character Recognition (OCR) processor as described here: https://cloud.google.com/document-ai/docs/create-processorThe GCS_OUTPUT_PATH should be a path to a folder on GCS (starting with gs://) and a PROCESSOR_NAME should look like projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID or projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID/processorVersions/PROCESSOR_VERSION_ID. You can get it either programmatically or copy from the Prediction endpoint section of the Processor details tab in the Google Cloud Console.GCS_OUTPUT_PATH = "gs://BUCKET_NAME/FOLDER_PATH"PROCESSOR_NAME = "projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID"from langchain.document_loaders.blob_loaders import Blobfrom langchain.document_loaders.parsers import DocAIParserNow, create a DocAIParser.parser = DocAIParser( location="us", processor_name=PROCESSOR_NAME, gcs_output_path=GCS_OUTPUT_PATH)For this example, you can use an Alphabet | Document AI is a document understanding platform from Google Cloud to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume. | Document AI is a document understanding platform from Google Cloud to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersGoogle Cloud Document AIGoogle Cloud Document AIDocument AI is a document understanding platform from Google Cloud to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume.Learn more:Document AI overviewDocument AI videos and labsTry it!The module contains a PDF parser based on DocAI from Google Cloud.You need to install two libraries to use this parser:First, you need to set up a Google Cloud Storage (GCS) bucket and create your own Optical Character Recognition (OCR) processor as described here: https://cloud.google.com/document-ai/docs/create-processorThe GCS_OUTPUT_PATH should be a path to a folder on GCS (starting with gs://) and a PROCESSOR_NAME should look like projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID or projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID/processorVersions/PROCESSOR_VERSION_ID. You can get it either programmatically or copy from the Prediction endpoint section of the Processor details tab in the Google Cloud Console.GCS_OUTPUT_PATH = "gs://BUCKET_NAME/FOLDER_PATH"PROCESSOR_NAME = "projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID"from langchain.document_loaders.blob_loaders import Blobfrom langchain.document_loaders.parsers import DocAIParserNow, create a DocAIParser.parser = DocAIParser( location="us", processor_name=PROCESSOR_NAME, gcs_output_path=GCS_OUTPUT_PATH)For this example, you can use an Alphabet |
801 | this example, you can use an Alphabet earnings report that's uploaded to a public GCS bucket.2022Q1_alphabet_earnings_release.pdfPass the document to the lazy_parse() method toblob = Blob(path="gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs/2022Q1_alphabet_earnings_release.pdf")We'll get one document per page, 11 in total:docs = list(parser.lazy_parse(blob))print(len(docs)) 11You can run end-to-end parsing of a blob one-by-one. If you have many documents, it might be a better approach to batch them together and maybe even detach parsing from handling the results of parsing.operations = parser.docai_parse([blob])print([op.operation.name for op in operations]) ['projects/543079149601/locations/us/operations/16447136779727347991']You can check whether operations are finished:parser.is_running(operations) TrueAnd when they're finished, you can parse the results:parser.is_running(operations) Falseresults = parser.get_results(operations)print(results[0]) DocAIParsingResults(source_path='gs://vertex-pgt/examples/goog-exhibit-99-1-q1-2023-19.pdf', parsed_path='gs://vertex-pgt/test/run1/16447136779727347991/0')And now we can finally generate Documents from parsed results:docs = list(parser.parse_from_results(results))print(len(docs)) 11PreviousBeautiful SoupNextDoctran: extract propertiesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Document AI is a document understanding platform from Google Cloud to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume. | Document AI is a document understanding platform from Google Cloud to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume. ->: this example, you can use an Alphabet earnings report that's uploaded to a public GCS bucket.2022Q1_alphabet_earnings_release.pdfPass the document to the lazy_parse() method toblob = Blob(path="gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs/2022Q1_alphabet_earnings_release.pdf")We'll get one document per page, 11 in total:docs = list(parser.lazy_parse(blob))print(len(docs)) 11You can run end-to-end parsing of a blob one-by-one. If you have many documents, it might be a better approach to batch them together and maybe even detach parsing from handling the results of parsing.operations = parser.docai_parse([blob])print([op.operation.name for op in operations]) ['projects/543079149601/locations/us/operations/16447136779727347991']You can check whether operations are finished:parser.is_running(operations) TrueAnd when they're finished, you can parse the results:parser.is_running(operations) Falseresults = parser.get_results(operations)print(results[0]) DocAIParsingResults(source_path='gs://vertex-pgt/examples/goog-exhibit-99-1-q1-2023-19.pdf', parsed_path='gs://vertex-pgt/test/run1/16447136779727347991/0')And now we can finally generate Documents from parsed results:docs = list(parser.parse_from_results(results))print(len(docs)) 11PreviousBeautiful SoupNextDoctran: extract propertiesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
802 | OpenAI metadata tagger | ü¶úÔ∏èüîó Langchain | It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. | It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. ->: OpenAI metadata tagger | ü¶úÔ∏èüîó Langchain |
803 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersOpenAI metadata taggerOn this pageOpenAI metadata taggerIt can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.The OpenAIMetadataTagger document transformer automates this process by extracting metadata from each provided document according to a provided schema. It uses a configurable OpenAI Functions-powered chain under the hood, so if you pass a custom LLM instance, it must be an OpenAI model with functions support. Note: This document transformer works best with complete documents, so it's best to run it first with whole documents before doing any other splitting or processing!For example, let's say you wanted to index a set of movie reviews. You could initialize the document transformer with a valid JSON Schema object as follows:from langchain.schema import Documentfrom langchain.chat_models import ChatOpenAIfrom langchain.document_transformers.openai_functions import create_metadata_taggerschema = { "properties": { "movie_title": {"type": "string"}, "critic": {"type": "string"}, "tone": {"type": "string", "enum": ["positive", "negative"]}, "rating": { "type": "integer", "description": "The number of stars the critic rated the movie", }, }, "required": ["movie_title", | It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. | It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersOpenAI metadata taggerOn this pageOpenAI metadata taggerIt can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.The OpenAIMetadataTagger document transformer automates this process by extracting metadata from each provided document according to a provided schema. It uses a configurable OpenAI Functions-powered chain under the hood, so if you pass a custom LLM instance, it must be an OpenAI model with functions support. Note: This document transformer works best with complete documents, so it's best to run it first with whole documents before doing any other splitting or processing!For example, let's say you wanted to index a set of movie reviews. You could initialize the document transformer with a valid JSON Schema object as follows:from langchain.schema import Documentfrom langchain.chat_models import ChatOpenAIfrom langchain.document_transformers.openai_functions import create_metadata_taggerschema = { "properties": { "movie_title": {"type": "string"}, "critic": {"type": "string"}, "tone": {"type": "string", "enum": ["positive", "negative"]}, "rating": { "type": "integer", "description": "The number of stars the critic rated the movie", }, }, "required": ["movie_title", |
804 | }, }, "required": ["movie_title", "critic", "tone"],}# Must be an OpenAI model that supports functionsllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")document_transformer = create_metadata_tagger(metadata_schema=schema, llm=llm)You can then simply pass the document transformer a list of documents, and it will extract metadata from the contents:original_documents = [ Document( page_content="Review of The Bee Movie\nBy Roger Ebert\n\nThis is the greatest movie ever made. 4 out of 5 stars." ), Document( page_content="Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}, ),]enhanced_documents = document_transformer.transform_documents(original_documents)import jsonprint( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false}The new documents can then be further processed by a text splitter before being loaded into a vector store. Extracted fields will not overwrite existing metadata.You can also initialize the document transformer with a Pydantic schema:from typing import Literalfrom pydantic import BaseModel, Fieldclass Properties(BaseModel): movie_title: str critic: str tone: Literal["positive", "negative"] rating: int = Field(description="Rating out of 5 stars")document_transformer = create_metadata_tagger(Properties, llm)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + | It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. | It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. ->: }, }, "required": ["movie_title", "critic", "tone"],}# Must be an OpenAI model that supports functionsllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")document_transformer = create_metadata_tagger(metadata_schema=schema, llm=llm)You can then simply pass the document transformer a list of documents, and it will extract metadata from the contents:original_documents = [ Document( page_content="Review of The Bee Movie\nBy Roger Ebert\n\nThis is the greatest movie ever made. 4 out of 5 stars." ), Document( page_content="Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}, ),]enhanced_documents = document_transformer.transform_documents(original_documents)import jsonprint( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false}The new documents can then be further processed by a text splitter before being loaded into a vector store. Extracted fields will not overwrite existing metadata.You can also initialize the document transformer with a Pydantic schema:from typing import Literalfrom pydantic import BaseModel, Fieldclass Properties(BaseModel): movie_title: str critic: str tone: Literal["positive", "negative"] rating: int = Field(description="Rating out of 5 stars")document_transformer = create_metadata_tagger(Properties, llm)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + |
805 | *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false}Customization​You can pass the underlying tagging chain the standard LLMChain arguments in the document transformer constructor. For example, if you wanted to ask the LLM to focus specific details in the input documents, or extract metadata in a certain style, you could pass in a custom prompt:from langchain.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template( """Extract relevant information from the following text.Anonymous critics are actually Roger Ebert.{input}""")document_transformer = create_metadata_tagger(schema, llm, prompt=prompt)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Roger Ebert", "tone": "negative", "rating": 1, "reliable": false}PreviousNucliaNextText embedding modelsCustomizationCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. | It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. ->: *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false}Customization​You can pass the underlying tagging chain the standard LLMChain arguments in the document transformer constructor. For example, if you wanted to ask the LLM to focus specific details in the input documents, or extract metadata in a certain style, you could pass in a custom prompt:from langchain.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template( """Extract relevant information from the following text.Anonymous critics are actually Roger Ebert.{input}""")document_transformer = create_metadata_tagger(schema, llm, prompt=prompt)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Roger Ebert", "tone": "negative", "rating": 1, "reliable": false}PreviousNucliaNextText embedding modelsCustomizationCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
806 | Doctran: interrogate documents | ü¶úÔ∏èüîó Langchain | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. ->: Doctran: interrogate documents | ü¶úÔ∏èüîó Langchain |
807 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersDoctran: interrogate documentsOn this pageDoctran: interrogate documentsDocuments used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents.We can accomplish this using the Doctran library, which uses OpenAI's function calling feature to "interrogate" documents.See this notebook for benchmarks on vector similarity scores for various queries based on raw documents versus interrogated documents.pip install doctranimport jsonfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranQATransformerfrom dotenv import load_dotenvload_dotenv() TrueInput‚ÄãThis is the document we'll interrogatesample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersDoctran: interrogate documentsOn this pageDoctran: interrogate documentsDocuments used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents.We can accomplish this using the Doctran library, which uses OpenAI's function calling feature to "interrogate" documents.See this notebook for benchmarks on vector similarity scores for various queries based on raw documents versus interrogated documents.pip install doctranimport jsonfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranQATransformerfrom dotenv import load_dotenvload_dotenv() TrueInput‚ÄãThis is the document we'll interrogatesample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our |
808 | have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected] Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. ->: have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected] Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been |
809 | of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & [email protected]"""print(sample_text) [Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected]. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. ->: of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & [email protected]"""print(sample_text) [Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected]. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive |
810 | service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan Cofounder & CEO Psychic [email protected] documents = | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. ->: service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan Cofounder & CEO Psychic [email protected] documents = |
811 | Psychic [email protected] documents = [Document(page_content=sample_text)]qa_transformer = DoctranQATransformer()transformed_document = await qa_transformer.atransform_documents(documents)Output‚ÄãAfter interrogating a document, the result will be returned as a new document with questions and answers provided in the metadata.transformed_document = await qa_transformer.atransform_documents(documents)print(json.dumps(transformed_document[0].metadata, indent=2)) { "questions_and_answers": [ { "question": "What is the purpose of this document?", "answer": "The purpose of this document is to provide important updates and discuss various topics that require the team's attention." }, { "question": "Who is responsible for enhancing the network security?", "answer": "John Doe from the IT department is responsible for enhancing the network security." }, { "question": "Where should potential security risks or incidents be reported?", "answer": "Potential security risks or incidents should be reported to the dedicated team at [email protected]." }, { "question": "Who has been recognized for outstanding performance in customer service?", "answer": "Jane Smith has been recognized for her outstanding performance in customer service." }, { "question": "When is the open enrollment period for the employee benefits program?", "answer": "The document does not specify the exact dates for the open enrollment period for the employee benefits program, but it mentions that it is fast approaching." }, { "question": "Who should be contacted for questions or assistance regarding the employee benefits program?", "answer": "For questions or assistance regarding the employee benefits program, the HR representative, Michael Johnson, should be contacted." }, { "question": | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. ->: Psychic [email protected] documents = [Document(page_content=sample_text)]qa_transformer = DoctranQATransformer()transformed_document = await qa_transformer.atransform_documents(documents)Output‚ÄãAfter interrogating a document, the result will be returned as a new document with questions and answers provided in the metadata.transformed_document = await qa_transformer.atransform_documents(documents)print(json.dumps(transformed_document[0].metadata, indent=2)) { "questions_and_answers": [ { "question": "What is the purpose of this document?", "answer": "The purpose of this document is to provide important updates and discuss various topics that require the team's attention." }, { "question": "Who is responsible for enhancing the network security?", "answer": "John Doe from the IT department is responsible for enhancing the network security." }, { "question": "Where should potential security risks or incidents be reported?", "answer": "Potential security risks or incidents should be reported to the dedicated team at [email protected]." }, { "question": "Who has been recognized for outstanding performance in customer service?", "answer": "Jane Smith has been recognized for her outstanding performance in customer service." }, { "question": "When is the open enrollment period for the employee benefits program?", "answer": "The document does not specify the exact dates for the open enrollment period for the employee benefits program, but it mentions that it is fast approaching." }, { "question": "Who should be contacted for questions or assistance regarding the employee benefits program?", "answer": "For questions or assistance regarding the employee benefits program, the HR representative, Michael Johnson, should be contacted." }, { "question": |
812 | }, { "question": "Who has been acknowledged for managing the company's social media platforms?", "answer": "Sarah Thompson has been acknowledged for managing the company's social media platforms." }, { "question": "When is the upcoming product launch event?", "answer": "The upcoming product launch event is on July 15th." }, { "question": "Who has been recognized for their contributions to the development of the company's technology?", "answer": "David Rodriguez has been recognized for his contributions to the development of the company's technology." }, { "question": "When is the monthly R&D brainstorming session?", "answer": "The monthly R&D brainstorming session is scheduled for July 10th." }, { "question": "Who should be contacted for questions or concerns regarding the topics discussed in the document?", "answer": "For questions or concerns regarding the topics discussed in the document, Jason Fan, the Cofounder & CEO, should be contacted." } ] }PreviousDoctran: extract propertiesNextDoctran: language translationInputOutputCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. | Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. ->: }, { "question": "Who has been acknowledged for managing the company's social media platforms?", "answer": "Sarah Thompson has been acknowledged for managing the company's social media platforms." }, { "question": "When is the upcoming product launch event?", "answer": "The upcoming product launch event is on July 15th." }, { "question": "Who has been recognized for their contributions to the development of the company's technology?", "answer": "David Rodriguez has been recognized for his contributions to the development of the company's technology." }, { "question": "When is the monthly R&D brainstorming session?", "answer": "The monthly R&D brainstorming session is scheduled for July 10th." }, { "question": "Who should be contacted for questions or concerns regarding the topics discussed in the document?", "answer": "For questions or concerns regarding the topics discussed in the document, Jason Fan, the Cofounder & CEO, should be contacted." } ] }PreviousDoctran: extract propertiesNextDoctran: language translationInputOutputCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
813 | Nuclia | ü¶úÔ∏èüîó Langchain | Nuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing. | Nuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing. ->: Nuclia | ü¶úÔ∏èüîó Langchain |
814 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersNucliaNucliaNuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.Nuclia Understanding API document transformer splits text into paragraphs and sentences, identifies entities, provides a summary of the text and generates embeddings for all the sentences.To use the Nuclia Understanding API, you need to have a Nuclia account. You can create one for free at https://nuclia.cloud, and then create a NUA key.from langchain.document_transformers.nuclia_text_transform import NucliaTextTransformer#!pip install --upgrade protobuf#!pip install nucliadb-protosimport osos.environ["NUCLIA_ZONE"] = "<YOUR_ZONE>" # e.g. europe-1os.environ["NUCLIA_NUA_KEY"] = "<YOUR_API_KEY>"To use the Nuclia document transformer, you need to instantiate a NucliaUnderstandingAPI tool with enable_ml set to True:from langchain.tools.nuclia import NucliaUnderstandingAPInua = NucliaUnderstandingAPI(enable_ml=True)The Nuclia document transformer must be called in async mode, so you need to use the atransform_documents method:import asynciofrom langchain.document_transformers.nuclia_text_transform import NucliaTextTransformerfrom langchain.schema.document import Documentasync def process(): documents = [ Document(page_content="<TEXT 1>", metadata={}), Document(page_content="<TEXT 2>", | Nuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing. | Nuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersNucliaNucliaNuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.Nuclia Understanding API document transformer splits text into paragraphs and sentences, identifies entities, provides a summary of the text and generates embeddings for all the sentences.To use the Nuclia Understanding API, you need to have a Nuclia account. You can create one for free at https://nuclia.cloud, and then create a NUA key.from langchain.document_transformers.nuclia_text_transform import NucliaTextTransformer#!pip install --upgrade protobuf#!pip install nucliadb-protosimport osos.environ["NUCLIA_ZONE"] = "<YOUR_ZONE>" # e.g. europe-1os.environ["NUCLIA_NUA_KEY"] = "<YOUR_API_KEY>"To use the Nuclia document transformer, you need to instantiate a NucliaUnderstandingAPI tool with enable_ml set to True:from langchain.tools.nuclia import NucliaUnderstandingAPInua = NucliaUnderstandingAPI(enable_ml=True)The Nuclia document transformer must be called in async mode, so you need to use the atransform_documents method:import asynciofrom langchain.document_transformers.nuclia_text_transform import NucliaTextTransformerfrom langchain.schema.document import Documentasync def process(): documents = [ Document(page_content="<TEXT 1>", metadata={}), Document(page_content="<TEXT 2>", |
815 | Document(page_content="<TEXT 2>", metadata={}), Document(page_content="<TEXT 3>", metadata={}), ] nuclia_transformer = NucliaTextTransformer(nua) transformed_documents = await nuclia_transformer.atransform_documents(documents) print(transformed_documents)asyncio.run(process())PreviousHTML to textNextOpenAI metadata taggerCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Nuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing. | Nuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing. ->: Document(page_content="<TEXT 2>", metadata={}), Document(page_content="<TEXT 3>", metadata={}), ] nuclia_transformer = NucliaTextTransformer(nua) transformed_documents = await nuclia_transformer.atransform_documents(documents) print(transformed_documents)asyncio.run(process())PreviousHTML to textNextOpenAI metadata taggerCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
816 | Doctran: extract properties | ü¶úÔ∏èüîó Langchain | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. ->: Doctran: extract properties | ü¶úÔ∏èüîó Langchain |
817 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersDoctran: extract propertiesOn this pageDoctran: extract propertiesWe can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata.Extracting metadata from documents is helpful for a variety of tasks, including:Classification: classifying documents into different categoriesData mining: Extract structured data that can be used for data analysisStyle transfer: Change the way text is written to more closely match expected user input, improving vector search resultspip install doctranimport jsonfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranPropertyExtractorfrom dotenv import load_dotenvload_dotenv() TrueInput‚ÄãThis is the document we'll extract properties from.sample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersDoctran: extract propertiesOn this pageDoctran: extract propertiesWe can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata.Extracting metadata from documents is helpful for a variety of tasks, including:Classification: classifying documents into different categoriesData mining: Extract structured data that can be used for data analysisStyle transfer: Change the way text is written to more closely match expected user input, improving vector search resultspip install doctranimport jsonfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranPropertyExtractorfrom dotenv import load_dotenvload_dotenv() TrueInput‚ÄãThis is the document we'll extract properties from.sample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in |
818 | from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected] Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. ->: from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected] Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new |
819 | their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & [email protected]"""print(sample_text) [Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected]. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. ->: their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & [email protected]"""print(sample_text) [Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected]. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee |
820 | that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan Cofounder & CEO Psychic [email protected] documents = [Document(page_content=sample_text)]properties = [ { "name": "category", "description": "What type of email this | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. ->: that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan Cofounder & CEO Psychic [email protected] documents = [Document(page_content=sample_text)]properties = [ { "name": "category", "description": "What type of email this |
821 | "description": "What type of email this is.", "type": "string", "enum": ["update", "action_item", "customer_feedback", "announcement", "other"], "required": True, }, { "name": "mentions", "description": "A list of all people mentioned in this email.", "type": "array", "items": { "name": "full_name", "description": "The full name of the person mentioned.", "type": "string", }, "required": True, }, { "name": "eli5", "description": "Explain this email to me like I'm 5 years old.", "type": "string", "required": True, },]property_extractor = DoctranPropertyExtractor(properties=properties)Output‚ÄãAfter extracting properties from a document, the result will be returned as a new document with properties provided in the metadataextracted_document = await property_extractor.atransform_documents( documents, properties=properties)print(json.dumps(extracted_document[0].metadata, indent=2)) { "extracted_properties": { "category": "update", "mentions": [ "John Doe", "Jane Smith", "Michael Johnson", "Sarah Thompson", "David Rodriguez", "Jason Fan" ], "eli5": "This is an email from the CEO, Jason Fan, giving updates about different areas in the company. He talks about new security measures and praises John Doe for his work. He also mentions new hires and praises Jane Smith for her work in customer service. The CEO reminds everyone about the upcoming benefits enrollment and says to contact Michael Johnson with any questions. He talks about the marketing team's work and praises Sarah Thompson for increasing their social media followers. There's also a product launch event on July 15th. Lastly, he talks about the research and development projects and praises David Rodriguez for his work. There's a brainstorming session on July 10th." } | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. ->: "description": "What type of email this is.", "type": "string", "enum": ["update", "action_item", "customer_feedback", "announcement", "other"], "required": True, }, { "name": "mentions", "description": "A list of all people mentioned in this email.", "type": "array", "items": { "name": "full_name", "description": "The full name of the person mentioned.", "type": "string", }, "required": True, }, { "name": "eli5", "description": "Explain this email to me like I'm 5 years old.", "type": "string", "required": True, },]property_extractor = DoctranPropertyExtractor(properties=properties)Output‚ÄãAfter extracting properties from a document, the result will be returned as a new document with properties provided in the metadataextracted_document = await property_extractor.atransform_documents( documents, properties=properties)print(json.dumps(extracted_document[0].metadata, indent=2)) { "extracted_properties": { "category": "update", "mentions": [ "John Doe", "Jane Smith", "Michael Johnson", "Sarah Thompson", "David Rodriguez", "Jason Fan" ], "eli5": "This is an email from the CEO, Jason Fan, giving updates about different areas in the company. He talks about new security measures and praises John Doe for his work. He also mentions new hires and praises Jane Smith for her work in customer service. The CEO reminds everyone about the upcoming benefits enrollment and says to contact Michael Johnson with any questions. He talks about the marketing team's work and praises Sarah Thompson for increasing their social media followers. There's also a product launch event on July 15th. Lastly, he talks about the research and development projects and praises David Rodriguez for his work. There's a brainstorming session on July 10th." } |
822 | a brainstorming session on July 10th." } }PreviousGoogle Cloud Document AINextDoctran: interrogate documentsInputOutputCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. | We can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata. ->: a brainstorming session on July 10th." } }PreviousGoogle Cloud Document AINextDoctran: interrogate documentsInputOutputCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
823 | Doctran: language translation | ü¶úÔ∏èüîó Langchain | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. ->: Doctran: language translation | ü¶úÔ∏èüîó Langchain |
824 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersDoctran: language translationOn this pageDoctran: language translationComparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically.However, it can still be useful to use an LLM to translate documents into other languages before vectorizing them. This is especially helpful when users are expected to query the knowledge base in different languages, or when state-of-the-art embedding models are not available for a given language.We can accomplish this using the Doctran library, which uses OpenAI's function calling feature to translate documents between languages.pip install doctranfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranTextTranslatorfrom dotenv import load_dotenvload_dotenv() TrueInput‚ÄãThis is the document we'll translatesample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersDoctran: language translationOn this pageDoctran: language translationComparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically.However, it can still be useful to use an LLM to translate documents into other languages before vectorizing them. This is especially helpful when users are expected to query the knowledge base in different languages, or when state-of-the-art embedding models are not available for a given language.We can accomplish this using the Doctran library, which uses OpenAI's function calling feature to translate documents between languages.pip install doctranfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranTextTranslatorfrom dotenv import load_dotenvload_dotenv() TrueInput‚ÄãThis is the document we'll translatesample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we |
825 | security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected] Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. ->: security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected] Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the |
826 | as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & [email protected]"""documents = [Document(page_content=sample_text)]qa_translator = DoctranTextTranslator(language="spanish")Output​After translating a document, the result will be returned as a new document with the page_content translated into the target languagetranslated_document = await qa_translator.atransform_documents(documents)print(translated_document[0].page_content) [Generado con ChatGPT] Documento confidencial - Solo para uso interno Fecha: 1 de julio de 2023 Asunto: Actualizaciones y discusiones sobre varios temas Estimado equipo, Espero que este correo electrónico les encuentre bien. En este documento, me gustaría proporcionarles algunas actualizaciones importantes y discutir varios temas que requieren nuestra atención. Por favor, traten la información contenida aquí como altamente confidencial. Medidas de seguridad y privacidad Como parte de nuestro compromiso continuo para garantizar la seguridad y privacidad de los datos de nuestros clientes, hemos implementado medidas robustas en todos nuestros sistemas. Nos gustaría elogiar a John Doe (correo electrónico: [email protected]) del departamento de TI por su diligente trabajo en mejorar nuestra seguridad de red. En adelante, recordamos amablemente a todos que se adhieran | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. ->: as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & [email protected]"""documents = [Document(page_content=sample_text)]qa_translator = DoctranTextTranslator(language="spanish")Output​After translating a document, the result will be returned as a new document with the page_content translated into the target languagetranslated_document = await qa_translator.atransform_documents(documents)print(translated_document[0].page_content) [Generado con ChatGPT] Documento confidencial - Solo para uso interno Fecha: 1 de julio de 2023 Asunto: Actualizaciones y discusiones sobre varios temas Estimado equipo, Espero que este correo electrónico les encuentre bien. En este documento, me gustaría proporcionarles algunas actualizaciones importantes y discutir varios temas que requieren nuestra atención. Por favor, traten la información contenida aquí como altamente confidencial. Medidas de seguridad y privacidad Como parte de nuestro compromiso continuo para garantizar la seguridad y privacidad de los datos de nuestros clientes, hemos implementado medidas robustas en todos nuestros sistemas. Nos gustaría elogiar a John Doe (correo electrónico: [email protected]) del departamento de TI por su diligente trabajo en mejorar nuestra seguridad de red. En adelante, recordamos amablemente a todos que se adhieran |
827 | recordamos amablemente a todos que se adhieran estrictamente a nuestras políticas y directrices de protección de datos. Además, si se encuentran con cualquier riesgo de seguridad o incidente potencial, por favor repórtelo inmediatamente a nuestro equipo dedicado en [email protected]. Actualizaciones de RRHH y beneficios para empleados Recientemente, dimos la bienvenida a varios nuevos miembros del equipo que han hecho contribuciones significativas a sus respectivos departamentos. Me gustaría reconocer a Jane Smith (SSN: 049-45-5928) por su sobresaliente rendimiento en el servicio al cliente. Jane ha recibido constantemente comentarios positivos de nuestros clientes. Además, recuerden que el período de inscripción abierta para nuestro programa de beneficios para empleados se acerca rápidamente. Si tienen alguna pregunta o necesitan asistencia, por favor contacten a nuestro representante de RRHH, Michael Johnson (teléfono: 418-492-3850, correo electrónico: [email protected]). Iniciativas y campañas de marketing Nuestro equipo de marketing ha estado trabajando activamente en el desarrollo de nuevas estrategias para aumentar la conciencia de marca y fomentar la participación del cliente. Nos gustaría agradecer a Sarah Thompson (teléfono: 415-555-1234) por sus excepcionales esfuerzos en la gestión de nuestras plataformas de redes sociales. Sarah ha aumentado con éxito nuestra base de seguidores en un 20% solo en el último mes. Además, por favor marquen sus calendarios para el próximo evento de lanzamiento de producto el 15 de julio. Animamos a todos los miembros del equipo a asistir y apoyar este emocionante hito para nuestra empresa. Proyectos de investigación y desarrollo En nuestra búsqueda de la innovación, nuestro departamento de investigación y desarrollo ha estado trabajando incansablemente en varios proyectos. Me gustaría reconocer el excepcional trabajo de David Rodríguez (correo electrónico: | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. ->: recordamos amablemente a todos que se adhieran estrictamente a nuestras políticas y directrices de protección de datos. Además, si se encuentran con cualquier riesgo de seguridad o incidente potencial, por favor repórtelo inmediatamente a nuestro equipo dedicado en [email protected]. Actualizaciones de RRHH y beneficios para empleados Recientemente, dimos la bienvenida a varios nuevos miembros del equipo que han hecho contribuciones significativas a sus respectivos departamentos. Me gustaría reconocer a Jane Smith (SSN: 049-45-5928) por su sobresaliente rendimiento en el servicio al cliente. Jane ha recibido constantemente comentarios positivos de nuestros clientes. Además, recuerden que el período de inscripción abierta para nuestro programa de beneficios para empleados se acerca rápidamente. Si tienen alguna pregunta o necesitan asistencia, por favor contacten a nuestro representante de RRHH, Michael Johnson (teléfono: 418-492-3850, correo electrónico: [email protected]). Iniciativas y campañas de marketing Nuestro equipo de marketing ha estado trabajando activamente en el desarrollo de nuevas estrategias para aumentar la conciencia de marca y fomentar la participación del cliente. Nos gustaría agradecer a Sarah Thompson (teléfono: 415-555-1234) por sus excepcionales esfuerzos en la gestión de nuestras plataformas de redes sociales. Sarah ha aumentado con éxito nuestra base de seguidores en un 20% solo en el último mes. Además, por favor marquen sus calendarios para el próximo evento de lanzamiento de producto el 15 de julio. Animamos a todos los miembros del equipo a asistir y apoyar este emocionante hito para nuestra empresa. Proyectos de investigación y desarrollo En nuestra búsqueda de la innovación, nuestro departamento de investigación y desarrollo ha estado trabajando incansablemente en varios proyectos. Me gustaría reconocer el excepcional trabajo de David Rodríguez (correo electrónico: |
828 | trabajo de David Rodríguez (correo electrónico: [email protected]) en su papel de líder de proyecto. Las contribuciones de David al desarrollo de nuestra tecnología de vanguardia han sido fundamentales. Además, nos gustaría recordar a todos que compartan sus ideas y sugerencias para posibles nuevos proyectos durante nuestra sesión de lluvia de ideas de I+D mensual, programada para el 10 de julio. Por favor, traten la información de este documento con la máxima confidencialidad y asegúrense de que no se comparte con personas no autorizadas. Si tienen alguna pregunta o inquietud sobre los temas discutidos, no duden en ponerse en contacto conmigo directamente. Gracias por su atención, y sigamos trabajando juntos para alcanzar nuestros objetivos. Saludos cordiales, Jason Fan Cofundador y CEO Psychic [email protected]: interrogate documentsNextHTML to textInputOutputCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. | Comparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically. ->: trabajo de David Rodríguez (correo electrónico: [email protected]) en su papel de líder de proyecto. Las contribuciones de David al desarrollo de nuestra tecnología de vanguardia han sido fundamentales. Además, nos gustaría recordar a todos que compartan sus ideas y sugerencias para posibles nuevos proyectos durante nuestra sesión de lluvia de ideas de I+D mensual, programada para el 10 de julio. Por favor, traten la información de este documento con la máxima confidencialidad y asegúrense de que no se comparte con personas no autorizadas. Si tienen alguna pregunta o inquietud sobre los temas discutidos, no duden en ponerse en contacto conmigo directamente. Gracias por su atención, y sigamos trabajando juntos para alcanzar nuestros objetivos. Saludos cordiales, Jason Fan Cofundador y CEO Psychic [email protected]: interrogate documentsNextHTML to textInputOutputCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
829 | HTML to text | ü¶úÔ∏èüîó Langchain | html2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text. | html2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text. ->: HTML to text | ü¶úÔ∏èüîó Langchain |
830 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersHTML to textHTML to texthtml2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text. The ASCII also happens to be a valid Markdown (a text-to-HTML format).pip install html2textfrom langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load() Fetching pages: 100%|############| 2/2 [00:00<00:00, 10.75it/s]from langchain.document_transformers import Html2TextTransformerurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]html2text = Html2TextTransformer()docs_transformed = html2text.transform_documents(docs)docs_transformed[0].page_content[1000:2000] " * ESPNFC\n\n * X Games\n\n * SEC Network\n\n## ESPN Apps\n\n * ESPN\n\n * ESPN Fantasy\n\n## Follow ESPN\n\n * Facebook\n\n * Twitter\n\n * Instagram\n\n * Snapchat\n\n * YouTube\n\n * The ESPN Daily Podcast\n\n2023 FIFA Women's World Cup\n\n## Follow live: Canada takes on Nigeria in group stage of Women's World Cup\n\n2m\n\nEPA/Morgan Hancock\n\n## TOP HEADLINES\n\n * Snyder fined $60M over findings in investigation\n * NFL owners approve $6.05B sale of Commanders\n * Jags assistant comes out as gay in NFL milestone\n * O's alone atop East after topping slumping Rays\n * ACC's Phillips: Never condoned hazing at NU\n\n * Vikings WR Addison cited for driving 140 mph\n * 'Taking his time': | html2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text. | html2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersHTML to textHTML to texthtml2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text. The ASCII also happens to be a valid Markdown (a text-to-HTML format).pip install html2textfrom langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load() Fetching pages: 100%|############| 2/2 [00:00<00:00, 10.75it/s]from langchain.document_transformers import Html2TextTransformerurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]html2text = Html2TextTransformer()docs_transformed = html2text.transform_documents(docs)docs_transformed[0].page_content[1000:2000] " * ESPNFC\n\n * X Games\n\n * SEC Network\n\n## ESPN Apps\n\n * ESPN\n\n * ESPN Fantasy\n\n## Follow ESPN\n\n * Facebook\n\n * Twitter\n\n * Instagram\n\n * Snapchat\n\n * YouTube\n\n * The ESPN Daily Podcast\n\n2023 FIFA Women's World Cup\n\n## Follow live: Canada takes on Nigeria in group stage of Women's World Cup\n\n2m\n\nEPA/Morgan Hancock\n\n## TOP HEADLINES\n\n * Snyder fined $60M over findings in investigation\n * NFL owners approve $6.05B sale of Commanders\n * Jags assistant comes out as gay in NFL milestone\n * O's alone atop East after topping slumping Rays\n * ACC's Phillips: Never condoned hazing at NU\n\n * Vikings WR Addison cited for driving 140 mph\n * 'Taking his time': |
831 | cited for driving 140 mph\n * 'Taking his time': Patient QB Rodgers wows Jets\n * Reyna got U.S. assurances after Berhalter rehire\n * NFL Future Power Rankings\n\n## USWNT AT THE WORLD CUP\n\n### USA VS. VIETNAM: 9 P.M. ET FRIDAY\n\n## How do you defend against Alex Morgan? Former opponents sound off\n\nThe U.S. forward is unstoppable at this level, scoring 121 goals and adding 49"docs_transformed[1].page_content[1000:2000] "t's brain,\ncomplemented by several key components:\n\n * **Planning**\n * Subgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\n * Reflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\n * **Memory**\n * Short-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\n * Long-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\n * **Tool use**\n * The agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution c"PreviousDoctran: language translationNextNucliaCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | html2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text. | html2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text. ->: cited for driving 140 mph\n * 'Taking his time': Patient QB Rodgers wows Jets\n * Reyna got U.S. assurances after Berhalter rehire\n * NFL Future Power Rankings\n\n## USWNT AT THE WORLD CUP\n\n### USA VS. VIETNAM: 9 P.M. ET FRIDAY\n\n## How do you defend against Alex Morgan? Former opponents sound off\n\nThe U.S. forward is unstoppable at this level, scoring 121 goals and adding 49"docs_transformed[1].page_content[1000:2000] "t's brain,\ncomplemented by several key components:\n\n * **Planning**\n * Subgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\n * Reflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\n * **Memory**\n * Short-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\n * Long-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\n * **Tool use**\n * The agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution c"PreviousDoctran: language translationNextNucliaCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
832 | Beautiful Soup | ü¶úÔ∏èüîó Langchain | Beautiful Soup is a Python package for parsing | Beautiful Soup is a Python package for parsing ->: Beautiful Soup | ü¶úÔ∏èüîó Langchain |
833 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersBeautiful SoupBeautiful SoupBeautiful Soup is a Python package for parsing
HTML and XML documents (including having malformed markup, i.e. non-closed tags, so named after tag soup).
It creates a parse tree for parsed pages that can be used to extract data from HTML,[3] which | Beautiful Soup is a Python package for parsing | Beautiful Soup is a Python package for parsing ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersBeautiful SoupGoogle Cloud Document AIDoctran: extract propertiesDoctran: interrogate documentsDoctran: language translationHTML to textNucliaOpenAI metadata taggerText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument transformersBeautiful SoupBeautiful SoupBeautiful Soup is a Python package for parsing
HTML and XML documents (including having malformed markup, i.e. non-closed tags, so named after tag soup).
It creates a parse tree for parsed pages that can be used to extract data from HTML,[3] which |
834 | is useful for web scraping.Beautiful Soup offers fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning. It's suited for cases where you want to extract specific information and clean up the HTML content according to your needs.For example, we can scrape text content within <p>, <li>, <div>, and <a> tags from the HTML content:<p>: The paragraph tag. It defines a paragraph in HTML and is used to group together related sentences and/or phrases.<li>: The list item tag. It is used within ordered (<ol>) and unordered (<ul>) lists to define individual items within the list.<div>: The division tag. It is a block-level element used to group other inline or block-level elements.<a>: The anchor tag. It is used to define hyperlinks.from langchain.document_loaders import AsyncChromiumLoaderfrom langchain.document_transformers import BeautifulSoupTransformer# Load HTMLloader = AsyncChromiumLoader(["https://www.wsj.com"])html = loader.load()# Transformbs_transformer = BeautifulSoupTransformer()docs_transformed = bs_transformer.transform_documents(html,tags_to_extract=["p", "li", "div", "a"])docs_transformed[0].page_content[0:500] 'Conservative legal activists are challenging Amazon, Comcast and others using many of the same tools that helped kill affirmative-action programs in colleges.1,2099 min read U.S. stock indexes fell and government-bond prices climbed, after Moody’s lowered credit ratings for 10 smaller U.S. banks and said it was reviewing ratings for six larger ones. The Dow industrials dropped more than 150 points.3 min read Penn Entertainment’s Barstool Sportsbook app will be rebranded as ESPN Bet this fall as 'PreviousDocument transformersNextGoogle Cloud Document AICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Beautiful Soup is a Python package for parsing | Beautiful Soup is a Python package for parsing ->: is useful for web scraping.Beautiful Soup offers fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning. It's suited for cases where you want to extract specific information and clean up the HTML content according to your needs.For example, we can scrape text content within <p>, <li>, <div>, and <a> tags from the HTML content:<p>: The paragraph tag. It defines a paragraph in HTML and is used to group together related sentences and/or phrases.<li>: The list item tag. It is used within ordered (<ol>) and unordered (<ul>) lists to define individual items within the list.<div>: The division tag. It is a block-level element used to group other inline or block-level elements.<a>: The anchor tag. It is used to define hyperlinks.from langchain.document_loaders import AsyncChromiumLoaderfrom langchain.document_transformers import BeautifulSoupTransformer# Load HTMLloader = AsyncChromiumLoader(["https://www.wsj.com"])html = loader.load()# Transformbs_transformer = BeautifulSoupTransformer()docs_transformed = bs_transformer.transform_documents(html,tags_to_extract=["p", "li", "div", "a"])docs_transformed[0].page_content[0:500] 'Conservative legal activists are challenging Amazon, Comcast and others using many of the same tools that helped kill affirmative-action programs in colleges.1,2099 min read U.S. stock indexes fell and government-bond prices climbed, after Moody’s lowered credit ratings for 10 smaller U.S. banks and said it was reviewing ratings for six larger ones. The Dow industrials dropped more than 150 points.3 min read Penn Entertainment’s Barstool Sportsbook app will be rebranded as ESPN Bet this fall as 'PreviousDocument transformersNextGoogle Cloud Document AICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
835 | Google Cloud Storage Directory | ü¶úÔ∏èüîó Langchain | Google Cloud Storage is a managed service for storing unstructured data. | Google Cloud Storage is a managed service for storing unstructured data. ->: Google Cloud Storage Directory | ü¶úÔ∏èüîó Langchain |
836 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersGoogle Cloud Storage DirectoryOn this pageGoogle Cloud Storage DirectoryGoogle Cloud Storage | Google Cloud Storage is a managed service for storing unstructured data. | Google Cloud Storage is a managed service for storing unstructured data. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersGoogle Cloud Storage DirectoryOn this pageGoogle Cloud Storage DirectoryGoogle Cloud Storage |
837 | Cloud Storage DirectoryGoogle Cloud Storage is a managed service for storing unstructured data.This covers how to load document objects from an Google Cloud Storage (GCS) directory (bucket).# !pip install google-cloud-storagefrom langchain.document_loaders import GCSDirectoryLoaderloader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]Specifying a prefix‚ÄãYou can also specify a prefix for more finegrained control over what files to load.loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc", prefix="fake")loader.load() | Google Cloud Storage is a managed service for storing unstructured data. | Google Cloud Storage is a managed service for storing unstructured data. ->: Cloud Storage DirectoryGoogle Cloud Storage is a managed service for storing unstructured data.This covers how to load document objects from an Google Cloud Storage (GCS) directory (bucket).# !pip install google-cloud-storagefrom langchain.document_loaders import GCSDirectoryLoaderloader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]Specifying a prefix‚ÄãYou can also specify a prefix for more finegrained control over what files to load.loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc", prefix="fake")loader.load() |
838 | prefix="fake")loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpylg6291i/fake.docx'}, lookup_index=0)]PreviousGoogle BigQueryNextGoogle Cloud Storage FileSpecifying a prefixCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Google Cloud Storage is a managed service for storing unstructured data. | Google Cloud Storage is a managed service for storing unstructured data. ->: prefix="fake")loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpylg6291i/fake.docx'}, lookup_index=0)]PreviousGoogle BigQueryNextGoogle Cloud Storage FileSpecifying a prefixCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
839 | Google BigQuery | ü¶úÔ∏èüîó Langchain | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. ->: Google BigQuery | ü¶úÔ∏èüîó Langchain |
840 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersGoogle BigQueryOn this pageGoogle BigQueryGoogle BigQuery is a serverless and cost-effective | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersacreomAirbyte CDKAirbyte GongAirbyte HubspotAirbyte JSONAirbyte SalesforceAirbyte ShopifyAirbyte StripeAirbyte TypeformAirbyte Zendesk SupportAirtableAlibaba Cloud MaxComputeApify DatasetArcGISArxivAssemblyAI Audio TranscriptsAsync ChromiumAsyncHtmlAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileAzure Document IntelligenceBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlessChatGPT DataCollege ConfidentialConcurrent LoaderConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDropboxDuckDBEmailEmbaasEPubEtherscanEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuawei OBS DirectoryHuawei OBS FileHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWiki DumpMerge Documents LoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft SharePointMicrosoft WordModern TreasuryMongoDBNews URLNotion DB 1/2Notion DB 2/2NucliaObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFrameAmazon TextractPolars DataFramePsychicPubMedPySparkReadTheDocs DocumentationRecursive URLRedditRoamRocksetrspaceRSS FeedsRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS FileTensorFlow Datasets2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameYouTube audioYouTube transcriptsDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsDocument loadersGoogle BigQueryOn this pageGoogle BigQueryGoogle BigQuery is a serverless and cost-effective |
841 | BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. ->: BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. |
842 | BigQuery is a part of the Google Cloud Platform.Load a BigQuery query with one document per row.#!pip install google-cloud-bigqueryfrom langchain.document_loaders import BigQueryLoaderBASE_QUERY = """SELECT id, dna_sequence, organismFROM ( SELECT ARRAY ( SELECT AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism UNION ALL SELECT AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism UNION ALL SELECT AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array), UNNEST(new_array)"""Basic Usage‚Äãloader = BigQueryLoader(BASE_QUERY)data = loader.load()print(data) [Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={}, lookup_index=0)]Specifying Which Columns are Content vs Metadata‚Äãloader = BigQueryLoader( BASE_QUERY, page_content_columns=["dna_sequence", "organism"], metadata_columns=["id"],)data = loader.load()print(data) [Document(page_content='dna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={'id': 1}, lookup_index=0), Document(page_content='dna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={'id': 2}, lookup_index=0), Document(page_content='dna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={'id': 3}, lookup_index=0)]Adding Source to Metadata‚Äã# Note that the `id` column is being returned twice, with one instance aliased as `source`ALIASED_QUERY = """SELECT id, dna_sequence, | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. ->: BigQuery is a part of the Google Cloud Platform.Load a BigQuery query with one document per row.#!pip install google-cloud-bigqueryfrom langchain.document_loaders import BigQueryLoaderBASE_QUERY = """SELECT id, dna_sequence, organismFROM ( SELECT ARRAY ( SELECT AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism UNION ALL SELECT AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism UNION ALL SELECT AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array), UNNEST(new_array)"""Basic Usage‚Äãloader = BigQueryLoader(BASE_QUERY)data = loader.load()print(data) [Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={}, lookup_index=0)]Specifying Which Columns are Content vs Metadata‚Äãloader = BigQueryLoader( BASE_QUERY, page_content_columns=["dna_sequence", "organism"], metadata_columns=["id"],)data = loader.load()print(data) [Document(page_content='dna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={'id': 1}, lookup_index=0), Document(page_content='dna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={'id': 2}, lookup_index=0), Document(page_content='dna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={'id': 3}, lookup_index=0)]Adding Source to Metadata‚Äã# Note that the `id` column is being returned twice, with one instance aliased as `source`ALIASED_QUERY = """SELECT id, dna_sequence, |
843 | = """SELECT id, dna_sequence, organism, id as sourceFROM ( SELECT ARRAY ( SELECT AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism UNION ALL SELECT AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism UNION ALL SELECT AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array), UNNEST(new_array)"""loader = BigQueryLoader(ALIASED_QUERY, metadata_columns=["source"])data = loader.load()print(data) [Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).\nsource: 1', lookup_str='', metadata={'source': 1}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).\nsource: 2', lookup_str='', metadata={'source': 2}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).\nsource: 3', lookup_str='', metadata={'source': 3}, lookup_index=0)]PreviousGitHubNextGoogle Cloud Storage DirectoryBasic UsageSpecifying Which Columns are Content vs MetadataAdding Source to MetadataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. | Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. ->: = """SELECT id, dna_sequence, organism, id as sourceFROM ( SELECT ARRAY ( SELECT AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism UNION ALL SELECT AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism UNION ALL SELECT AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array), UNNEST(new_array)"""loader = BigQueryLoader(ALIASED_QUERY, metadata_columns=["source"])data = loader.load()print(data) [Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).\nsource: 1', lookup_str='', metadata={'source': 1}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).\nsource: 2', lookup_str='', metadata={'source': 2}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).\nsource: 3', lookup_str='', metadata={'source': 3}, lookup_index=0)]PreviousGitHubNextGoogle Cloud Storage DirectoryBasic UsageSpecifying Which Columns are Content vs MetadataAdding Source to MetadataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
844 | Google Vertex AI MatchingEngine | ü¶úÔ∏èüîó Langchain | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. ->: Google Vertex AI MatchingEngine | ü¶úÔ∏èüîó Langchain |
845 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesGoogle Vertex AI MatchingEngineOn this pageGoogle Vertex AI MatchingEngineThis notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database.Vertex AI Matching Engine provides the industry's leading high-scale low latency vector database. These vector databases are commonly referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.Note: This module expects an endpoint and deployed index already created as the creation time takes close to one hour. To see how to create an index refer to the section Create Index and deploy it to an EndpointCreate VectorStore from texts‚Äãfrom langchain.vectorstores import MatchingEnginetexts = [ "The cat sat on", "the mat.", "I like to", "eat pizza for", "dinner.", "The sun sets", "in the west.",]vector_store = MatchingEngine.from_components( texts=texts, project_id="<my_project_id>", region="<my_region>", gcs_bucket_uri="<my_gcs_bucket>", | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesGoogle Vertex AI MatchingEngineOn this pageGoogle Vertex AI MatchingEngineThis notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database.Vertex AI Matching Engine provides the industry's leading high-scale low latency vector database. These vector databases are commonly referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.Note: This module expects an endpoint and deployed index already created as the creation time takes close to one hour. To see how to create an index refer to the section Create Index and deploy it to an EndpointCreate VectorStore from texts‚Äãfrom langchain.vectorstores import MatchingEnginetexts = [ "The cat sat on", "the mat.", "I like to", "eat pizza for", "dinner.", "The sun sets", "in the west.",]vector_store = MatchingEngine.from_components( texts=texts, project_id="<my_project_id>", region="<my_region>", gcs_bucket_uri="<my_gcs_bucket>", |
846 | gcs_bucket_uri="<my_gcs_bucket>", index_id="<my_matching_engine_index_id>", endpoint_id="<my_matching_engine_endpoint_id>",)vector_store.add_texts(texts=texts)vector_store.similarity_search("lunch", k=2)Create Index and deploy it to an Endpoint‚ÄãImports, Constants and Configs‚Äã# Installing dependencies.pip install tensorflow \ google-cloud-aiplatform \ tensorflow-hub \ tensorflow-textimport osimport jsonfrom google.cloud import aiplatformimport tensorflow_hub as hubimport tensorflow_textPROJECT_ID = "<my_project_id>"REGION = "<my_region>"VPC_NETWORK = "<my_vpc_network_name>"PEERING_RANGE_NAME = "ann-langchain-me-range" # Name for creating the VPC peering.BUCKET_URI = "gs://<bucket_uri>"# The number of dimensions for the tensorflow universal sentence encoder.# If other embedder is used, the dimensions would probably need to change.DIMENSIONS = 512DISPLAY_NAME = "index-test-name"EMBEDDING_DIR = f"{BUCKET_URI}/banana"DEPLOYED_INDEX_ID = "endpoint-test-name"PROJECT_NUMBER = !gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'PROJECT_NUMBER = PROJECT_NUMBER[0]VPC_NETWORK_FULL = f"projects/{PROJECT_NUMBER}/global/networks/{VPC_NETWORK}"# Change this if you need the VPC to be created.CREATE_VPC = False# Set the project id gcloud config set project {PROJECT_ID}# Remove the if condition to run the encapsulated codeif CREATE_VPC: # Create a VPC network gcloud compute networks create {VPC_NETWORK} --bgp-routing-mode=regional --subnet-mode=auto --project={PROJECT_ID} # Add necessary firewall rules gcloud compute firewall-rules create {VPC_NETWORK}-allow-icmp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow icmp gcloud compute firewall-rules create {VPC_NETWORK}-allow-internal --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow all --source-ranges 10.128.0.0/9 gcloud compute firewall-rules create {VPC_NETWORK}-allow-rdp --network {VPC_NETWORK} | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. ->: gcs_bucket_uri="<my_gcs_bucket>", index_id="<my_matching_engine_index_id>", endpoint_id="<my_matching_engine_endpoint_id>",)vector_store.add_texts(texts=texts)vector_store.similarity_search("lunch", k=2)Create Index and deploy it to an Endpoint‚ÄãImports, Constants and Configs‚Äã# Installing dependencies.pip install tensorflow \ google-cloud-aiplatform \ tensorflow-hub \ tensorflow-textimport osimport jsonfrom google.cloud import aiplatformimport tensorflow_hub as hubimport tensorflow_textPROJECT_ID = "<my_project_id>"REGION = "<my_region>"VPC_NETWORK = "<my_vpc_network_name>"PEERING_RANGE_NAME = "ann-langchain-me-range" # Name for creating the VPC peering.BUCKET_URI = "gs://<bucket_uri>"# The number of dimensions for the tensorflow universal sentence encoder.# If other embedder is used, the dimensions would probably need to change.DIMENSIONS = 512DISPLAY_NAME = "index-test-name"EMBEDDING_DIR = f"{BUCKET_URI}/banana"DEPLOYED_INDEX_ID = "endpoint-test-name"PROJECT_NUMBER = !gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'PROJECT_NUMBER = PROJECT_NUMBER[0]VPC_NETWORK_FULL = f"projects/{PROJECT_NUMBER}/global/networks/{VPC_NETWORK}"# Change this if you need the VPC to be created.CREATE_VPC = False# Set the project id gcloud config set project {PROJECT_ID}# Remove the if condition to run the encapsulated codeif CREATE_VPC: # Create a VPC network gcloud compute networks create {VPC_NETWORK} --bgp-routing-mode=regional --subnet-mode=auto --project={PROJECT_ID} # Add necessary firewall rules gcloud compute firewall-rules create {VPC_NETWORK}-allow-icmp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow icmp gcloud compute firewall-rules create {VPC_NETWORK}-allow-internal --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow all --source-ranges 10.128.0.0/9 gcloud compute firewall-rules create {VPC_NETWORK}-allow-rdp --network {VPC_NETWORK} |
847 | {VPC_NETWORK}-allow-rdp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:3389 gcloud compute firewall-rules create {VPC_NETWORK}-allow-ssh --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:22 # Reserve IP range gcloud compute addresses create {PEERING_RANGE_NAME} --global --prefix-length=16 --network={VPC_NETWORK} --purpose=VPC_PEERING --project={PROJECT_ID} --description="peering range" # Set up peering with service networking # Your account must have the "Compute Network Admin" role to run the following. gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --network={VPC_NETWORK} --ranges={PEERING_RANGE_NAME} --project={PROJECT_ID}# Creating bucket. gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URIUsing Tensorflow Universal Sentence Encoder as an Embedder‚Äã# Load the Universal Sentence Encoder modulemodule_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"model = hub.load(module_url)# Generate embeddings for each wordembeddings = model(["banana"])Inserting a test embedding‚Äãinitial_config = { "id": "banana_id", "embedding": [float(x) for x in list(embeddings.numpy()[0])],}with open("data.json", "w") as f: json.dump(initial_config, f)gsutil cp data.json {EMBEDDING_DIR}/file.jsonaiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)Creating Index‚Äãmy_index = aiplatform.MatchingEngineIndex.create_tree_ah_index( display_name=DISPLAY_NAME, contents_delta_uri=EMBEDDING_DIR, dimensions=DIMENSIONS, approximate_neighbors_count=150, distance_measure_type="DOT_PRODUCT_DISTANCE",)Creating Endpoint‚Äãmy_index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create( display_name=f"{DISPLAY_NAME}-endpoint", network=VPC_NETWORK_FULL,)Deploy Index‚Äãmy_index_endpoint = my_index_endpoint.deploy_index( index=my_index, | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. ->: {VPC_NETWORK}-allow-rdp --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:3389 gcloud compute firewall-rules create {VPC_NETWORK}-allow-ssh --network {VPC_NETWORK} --priority 65534 --project {PROJECT_ID} --allow tcp:22 # Reserve IP range gcloud compute addresses create {PEERING_RANGE_NAME} --global --prefix-length=16 --network={VPC_NETWORK} --purpose=VPC_PEERING --project={PROJECT_ID} --description="peering range" # Set up peering with service networking # Your account must have the "Compute Network Admin" role to run the following. gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --network={VPC_NETWORK} --ranges={PEERING_RANGE_NAME} --project={PROJECT_ID}# Creating bucket. gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URIUsing Tensorflow Universal Sentence Encoder as an Embedder‚Äã# Load the Universal Sentence Encoder modulemodule_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"model = hub.load(module_url)# Generate embeddings for each wordembeddings = model(["banana"])Inserting a test embedding‚Äãinitial_config = { "id": "banana_id", "embedding": [float(x) for x in list(embeddings.numpy()[0])],}with open("data.json", "w") as f: json.dump(initial_config, f)gsutil cp data.json {EMBEDDING_DIR}/file.jsonaiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)Creating Index‚Äãmy_index = aiplatform.MatchingEngineIndex.create_tree_ah_index( display_name=DISPLAY_NAME, contents_delta_uri=EMBEDDING_DIR, dimensions=DIMENSIONS, approximate_neighbors_count=150, distance_measure_type="DOT_PRODUCT_DISTANCE",)Creating Endpoint‚Äãmy_index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create( display_name=f"{DISPLAY_NAME}-endpoint", network=VPC_NETWORK_FULL,)Deploy Index‚Äãmy_index_endpoint = my_index_endpoint.deploy_index( index=my_index, |
848 | index=my_index, deployed_index_id=DEPLOYED_INDEX_ID)my_index_endpoint.deployed_indexesPreviousMarqoNextMeilisearchCreate VectorStore from textsCreate Index and deploy it to an EndpointImports, Constants and ConfigsUsing Tensorflow Universal Sentence Encoder as an EmbedderInserting a test embeddingCreating IndexCreating EndpointDeploy IndexCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. | This notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database. ->: index=my_index, deployed_index_id=DEPLOYED_INDEX_ID)my_index_endpoint.deployed_indexesPreviousMarqoNextMeilisearchCreate VectorStore from textsCreate Index and deploy it to an EndpointImports, Constants and ConfigsUsing Tensorflow Universal Sentence Encoder as an EmbedderInserting a test embeddingCreating IndexCreating EndpointDeploy IndexCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
849 | Hologres | ü¶úÔ∏èüîó Langchain | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. ->: Hologres | ü¶úÔ∏èüîó Langchain |
850 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesHologresHologresHologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time.
Hologres supports standard SQL syntax, is compatible with PostgreSQL, and supports most PostgreSQL functions. Hologres supports online analytical processing (OLAP) and ad hoc analysis for up to petabytes of data, and provides high-concurrency and low-latency online data services. Hologres provides vector database functionality by adopting Proxima. | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesHologresHologresHologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time.
Hologres supports standard SQL syntax, is compatible with PostgreSQL, and supports most PostgreSQL functions. Hologres supports online analytical processing (OLAP) and ad hoc analysis for up to petabytes of data, and provides high-concurrency and low-latency online data services. Hologres provides vector database functionality by adopting Proxima. |
851 | Proxima is a high-performance software library developed by Alibaba DAMO Academy. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open-source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service.This notebook shows how to use functionality related to the Hologres Proxima vector database. | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. ->: Proxima is a high-performance software library developed by Alibaba DAMO Academy. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open-source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service.This notebook shows how to use functionality related to the Hologres Proxima vector database. |
852 | Click here to fast deploy a Hologres cloud instance.#!pip install psycopg2from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import HologresSplit documents and get embeddings by call OpenAI APIfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connect to Hologres by setting related ENVIRONMENTS.export PG_HOST={host}export PG_PORT={port} # Optional, default is 80export PG_DATABASE={db_name} # Optional, default is postgresexport PG_USER={username}export PG_PASSWORD={password}Then store your embeddings and documents into Hologresimport osconnection_string = Hologres.connection_string_from_db_params( host=os.environ.get("PGHOST", "localhost"), port=int(os.environ.get("PGPORT", "80")), database=os.environ.get("PGDATABASE", "postgres"), user=os.environ.get("PGUSER", "postgres"), password=os.environ.get("PGPASSWORD", "postgres"),)vector_db = Hologres.from_documents( docs, embeddings, connection_string=connection_string, table_name="langchain_example_embeddings",)Query and retrieve dataquery = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. ->: Click here to fast deploy a Hologres cloud instance.#!pip install psycopg2from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import HologresSplit documents and get embeddings by call OpenAI APIfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connect to Hologres by setting related ENVIRONMENTS.export PG_HOST={host}export PG_PORT={port} # Optional, default is 80export PG_DATABASE={db_name} # Optional, default is postgresexport PG_USER={username}export PG_PASSWORD={password}Then store your embeddings and documents into Hologresimport osconnection_string = Hologres.connection_string_from_db_params( host=os.environ.get("PGHOST", "localhost"), port=int(os.environ.get("PGPORT", "80")), database=os.environ.get("PGDATABASE", "postgres"), user=os.environ.get("PGUSER", "postgres"), password=os.environ.get("PGPASSWORD", "postgres"),)vector_db = Hologres.from_documents( docs, embeddings, connection_string=connection_string, table_name="langchain_example_embeddings",)Query and retrieve dataquery = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional |
853 | One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.PreviousFaissNextLanceDBCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. ->: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.PreviousFaissNextLanceDBCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
854 | PGVector | ü¶úÔ∏èüîó Langchain | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: PGVector | ü¶úÔ∏èüîó Langchain |
855 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesPGVectorOn this pagePGVectorPGVector is an open-source vector similarity search for PostgresIt supports:exact and approximate nearest neighbor searchL2 distance, inner product, and cosine distanceThis notebook shows how to use the Postgres vector database (PGVector).See the installation instruction.# Pip install necessary packagepip install pgvectorpip install openaipip install psycopg2-binarypip install tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")## Loading Environment Variablesfrom typing import List, Tuplefrom dotenv import load_dotenvload_dotenv() Falsefrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores.pgvector import PGVectorfrom langchain.document_loaders import TextLoaderfrom langchain.docstore.document import Documentloader = | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesPGVectorOn this pagePGVectorPGVector is an open-source vector similarity search for PostgresIt supports:exact and approximate nearest neighbor searchL2 distance, inner product, and cosine distanceThis notebook shows how to use the Postgres vector database (PGVector).See the installation instruction.# Pip install necessary packagepip install pgvectorpip install openaipip install psycopg2-binarypip install tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")## Loading Environment Variablesfrom typing import List, Tuplefrom dotenv import load_dotenvload_dotenv() Falsefrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores.pgvector import PGVectorfrom langchain.document_loaders import TextLoaderfrom langchain.docstore.document import Documentloader = |
856 | import Documentloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()# PGVector needs the connection string to the database.CONNECTION_STRING = "postgresql+psycopg2://harrisonchase@localhost:5432/test3"# # Alternatively, you can create it from enviornment variables.# import os# CONNECTION_STRING = PGVector.connection_string_from_db_params(# driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),# host=os.environ.get("PGVECTOR_HOST", "localhost"),# port=int(os.environ.get("PGVECTOR_PORT", "5432")),# database=os.environ.get("PGVECTOR_DATABASE", "postgres"),# user=os.environ.get("PGVECTOR_USER", "postgres"),# password=os.environ.get("PGVECTOR_PASSWORD", "postgres"),# )Similarity Search with Euclidean Distance (Default)​# The PGVector Module will try to create a table with the name of the collection.# So, make sure that the collection name is unique and the user has the permission to create a table.COLLECTION_NAME = "state_of_the_union_test"db = PGVector.from_documents( embedding=embeddings, documents=docs, collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING,)query = "What did the president say about Ketanji Brown Jackson"docs_with_score = db.similarity_search_with_score(query)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.18456886638850434 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: import Documentloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()# PGVector needs the connection string to the database.CONNECTION_STRING = "postgresql+psycopg2://harrisonchase@localhost:5432/test3"# # Alternatively, you can create it from enviornment variables.# import os# CONNECTION_STRING = PGVector.connection_string_from_db_params(# driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),# host=os.environ.get("PGVECTOR_HOST", "localhost"),# port=int(os.environ.get("PGVECTOR_PORT", "5432")),# database=os.environ.get("PGVECTOR_DATABASE", "postgres"),# user=os.environ.get("PGVECTOR_USER", "postgres"),# password=os.environ.get("PGVECTOR_PASSWORD", "postgres"),# )Similarity Search with Euclidean Distance (Default)​# The PGVector Module will try to create a table with the name of the collection.# So, make sure that the collection name is unique and the user has the permission to create a table.COLLECTION_NAME = "state_of_the_union_test"db = PGVector.from_documents( embedding=embeddings, documents=docs, collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING,)query = "What did the president say about Ketanji Brown Jackson"docs_with_score = db.similarity_search_with_score(query)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.18456886638850434 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army |
857 | this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.21742627672631343 A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.22641793174529334 And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.21742627672631343 A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.22641793174529334 And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught |
858 | bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.22670040608054465 Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. That ends on my watch. Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. Let’s pass the Paycheck Fairness Act and paid leave. Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.22670040608054465 Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. That ends on my watch. Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. Let’s pass the Paycheck Fairness Act and paid leave. Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who |
859 | and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges. --------------------------------------------------------------------------------Maximal Marginal Relevance Search (MMR)​Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.docs_with_score = db.max_marginal_relevance_search_with_score(query)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.18453882564037527 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.23523731441720075 We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges. --------------------------------------------------------------------------------Maximal Marginal Relevance Search (MMR)​Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.docs_with_score = db.max_marginal_relevance_search_with_score(query)for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.18453882564037527 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.23523731441720075 We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a |
860 | Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.2448441215698569 One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. When they came home, many of the world’s fittest and best trained warriors were never the same. Headaches. Numbness. Dizziness. A cancer that would put them in a flag-draped coffin. I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. Stationed near Baghdad, just yards from burn pits the size of football fields. Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.2448441215698569 One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. When they came home, many of the world’s fittest and best trained warriors were never the same. Headaches. Numbness. Dizziness. A cancer that would put them in a flag-draped coffin. I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. Stationed near Baghdad, just yards from burn pits the size of football fields. Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their |
861 | games. He loved building Legos with their daughter. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.2513994424701056 And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. These steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. But I want you to know that we are going to be okay. When the history of this era is written Putin’s war on Ukraine will have left Russia weaker and the rest of the world stronger. While it shouldn’t have taken something so terrible for people around the world to see what’s at stake now everyone sees it clearly. --------------------------------------------------------------------------------Working with vectorstore​Above, we created a vectorstore from scratch. However, often times we want to work with an existing vectorstore. | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: games. He loved building Legos with their daughter. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.2513994424701056 And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. These steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. But I want you to know that we are going to be okay. When the history of this era is written Putin’s war on Ukraine will have left Russia weaker and the rest of the world stronger. While it shouldn’t have taken something so terrible for people around the world to see what’s at stake now everyone sees it clearly. --------------------------------------------------------------------------------Working with vectorstore​Above, we created a vectorstore from scratch. However, often times we want to work with an existing vectorstore. |
862 | In order to do that, we can initialize it directly.store = PGVector( collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING, embedding_function=embeddings,)Add documents​We can add documents to the existing vectorstore.store.add_documents([Document(page_content="foo")]) ['048c2e14-1cf3-11ee-8777-e65801318980']docs_with_score = db.similarity_search_with_score("foo")docs_with_score[0] (Document(page_content='foo', metadata={}), 3.3203430005457335e-09)docs_with_score[1] (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.2404395365581814)Overriding a vectorstore​If you have an existing collection, you override it by doing from_documents and setting pre_delete_collection = Truedb = PGVector.from_documents( documents=docs, embedding=embeddings, collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING, pre_delete_collection=True,)docs_with_score = db.similarity_search_with_score("foo")docs_with_score[0] (Document(page_content='A former top | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: In order to do that, we can initialize it directly.store = PGVector( collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING, embedding_function=embeddings,)Add documents​We can add documents to the existing vectorstore.store.add_documents([Document(page_content="foo")]) ['048c2e14-1cf3-11ee-8777-e65801318980']docs_with_score = db.similarity_search_with_score("foo")docs_with_score[0] (Document(page_content='foo', metadata={}), 3.3203430005457335e-09)docs_with_score[1] (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.2404395365581814)Overriding a vectorstore​If you have an existing collection, you override it by doing from_documents and setting pre_delete_collection = Truedb = PGVector.from_documents( documents=docs, embedding=embeddings, collection_name=COLLECTION_NAME, connection_string=CONNECTION_STRING, pre_delete_collection=True,)docs_with_score = db.similarity_search_with_score("foo")docs_with_score[0] (Document(page_content='A former top |
863 | (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.2404115088144465)Using a VectorStore as a Retriever​retriever = store.as_retriever()print(retriever) tags=None metadata=None vectorstore=<langchain.vectorstores.pgvector.PGVector object at 0x29f94f880> search_type='similarity' search_kwargs={}PreviousPostgres EmbeddingNextPineconeSimilarity Search with Euclidean Distance (Default)Maximal Marginal Relevance Search (MMR)Working with vectorstoreAdd documentsOverriding a vectorstoreUsing a VectorStore as a RetrieverCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | PGVector is an open-source vector similarity search for Postgres | PGVector is an open-source vector similarity search for Postgres ->: (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.2404115088144465)Using a VectorStore as a Retriever​retriever = store.as_retriever()print(retriever) tags=None metadata=None vectorstore=<langchain.vectorstores.pgvector.PGVector object at 0x29f94f880> search_type='similarity' search_kwargs={}PreviousPostgres EmbeddingNextPineconeSimilarity Search with Euclidean Distance (Default)Maximal Marginal Relevance Search (MMR)Working with vectorstoreAdd documentsOverriding a vectorstoreUsing a VectorStore as a RetrieverCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
864 | Zilliz | ü¶úÔ∏èüîó Langchain | Zilliz Cloud is a fully managed service on cloud for LF AI Milvus¬Æ, | Zilliz Cloud is a fully managed service on cloud for LF AI Milvus¬Æ, ->: Zilliz | ü¶úÔ∏èüîó Langchain |
865 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesZillizZillizZilliz Cloud is a fully managed service on cloud for LF AI Milvus¬Æ,This notebook shows how to use functionality related to the Zilliz Cloud managed vector database.To run, you should have a Zilliz Cloud instance up and running. Here are the installation instructionspip install pymilvusWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:¬∑¬∑¬∑¬∑¬∑¬∑¬∑¬∑# replaceZILLIZ_CLOUD_URI = "" # example: "https://in01-17f69c292d4a5sa.aws-us-west-2.vectordb.zillizcloud.com:19536"ZILLIZ_CLOUD_USERNAME = "" # example: "username"ZILLIZ_CLOUD_PASSWORD = "" # example: "*********"ZILLIZ_CLOUD_API_KEY = "" # example: "*********" (for serverless clusters which can be used as replacements for user and password)from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Milvusfrom | Zilliz Cloud is a fully managed service on cloud for LF AI Milvus¬Æ, | Zilliz Cloud is a fully managed service on cloud for LF AI Milvus¬Æ, ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesZillizZillizZilliz Cloud is a fully managed service on cloud for LF AI Milvus¬Æ,This notebook shows how to use functionality related to the Zilliz Cloud managed vector database.To run, you should have a Zilliz Cloud instance up and running. Here are the installation instructionspip install pymilvusWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:¬∑¬∑¬∑¬∑¬∑¬∑¬∑¬∑# replaceZILLIZ_CLOUD_URI = "" # example: "https://in01-17f69c292d4a5sa.aws-us-west-2.vectordb.zillizcloud.com:19536"ZILLIZ_CLOUD_USERNAME = "" # example: "username"ZILLIZ_CLOUD_PASSWORD = "" # example: "*********"ZILLIZ_CLOUD_API_KEY = "" # example: "*********" (for serverless clusters which can be used as replacements for user and password)from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Milvusfrom |
866 | langchain.vectorstores import Milvusfrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()vector_db = Milvus.from_documents( docs, embeddings, connection_args={ "uri": ZILLIZ_CLOUD_URI, "user": ZILLIZ_CLOUD_USERNAME, "password": ZILLIZ_CLOUD_PASSWORD, # "token": ZILLIZ_CLOUD_API_KEY, # API key, for serverless clusters which can be used as replacements for user and password "secure": True, },)query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'PreviousZepNextRetrieversCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Zilliz Cloud is a fully managed service on cloud for LF AI Milvus®, | Zilliz Cloud is a fully managed service on cloud for LF AI Milvus®, ->: langchain.vectorstores import Milvusfrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()vector_db = Milvus.from_documents( docs, embeddings, connection_args={ "uri": ZILLIZ_CLOUD_URI, "user": ZILLIZ_CLOUD_USERNAME, "password": ZILLIZ_CLOUD_PASSWORD, # "token": ZILLIZ_CLOUD_API_KEY, # API key, for serverless clusters which can be used as replacements for user and password "secure": True, },)query = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)docs[0].page_content 'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'PreviousZepNextRetrieversCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
867 | OpenSearch | ü¶úÔ∏èüîó Langchain | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. ->: OpenSearch | ü¶úÔ∏èüîó Langchain |
868 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesOpenSearchOn this pageOpenSearchOpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.This notebook shows how to use functionality related to the OpenSearch database.To run, you should have an OpenSearch instance up and running: see here for an easy Docker installation.similarity_search by default performs the Approximate k-NN Search which uses one of the several algorithms like lucene, nmslib, faiss recommended for
large datasets. To perform brute force search we have other search methods known as Script Scoring and Painless Scripting. | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesOpenSearchOn this pageOpenSearchOpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.This notebook shows how to use functionality related to the OpenSearch database.To run, you should have an OpenSearch instance up and running: see here for an easy Docker installation.similarity_search by default performs the Approximate k-NN Search which uses one of the several algorithms like lucene, nmslib, faiss recommended for
large datasets. To perform brute force search we have other search methods known as Script Scoring and Painless Scripting. |
869 | Check this for more details.Installation‚ÄãInstall the Python client.pip install opensearch-pyWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import OpenSearchVectorSearchfrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()similarity_search using Approximate k-NN‚Äãsimilarity_search using Approximate k-NN Search with Custom Parametersdocsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200")# If using the default Docker installation, use this instantiation instead:# docsearch = OpenSearchVectorSearch.from_documents(# docs,# embeddings,# opensearch_url="https://localhost:9200",# http_auth=("admin", "admin"),# use_ssl = False,# verify_certs = False,# ssl_assert_hostname = False,# ssl_show_warn = False,# )query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query, k=10)print(docs[0].page_content)docsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200", engine="faiss", space_type="innerproduct", ef_construction=256, m=48,)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)similarity_search using Script Scoring‚Äãsimilarity_search using Script Scoring with Custom Parametersdocsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. ->: Check this for more details.Installation‚ÄãInstall the Python client.pip install opensearch-pyWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import OpenSearchVectorSearchfrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()similarity_search using Approximate k-NN‚Äãsimilarity_search using Approximate k-NN Search with Custom Parametersdocsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200")# If using the default Docker installation, use this instantiation instead:# docsearch = OpenSearchVectorSearch.from_documents(# docs,# embeddings,# opensearch_url="https://localhost:9200",# http_auth=("admin", "admin"),# use_ssl = False,# verify_certs = False,# ssl_assert_hostname = False,# ssl_show_warn = False,# )query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query, k=10)print(docs[0].page_content)docsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200", engine="faiss", space_type="innerproduct", ef_construction=256, m=48,)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)similarity_search using Script Scoring‚Äãsimilarity_search using Script Scoring with Custom Parametersdocsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, |
870 | docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search( "What did the president say about Ketanji Brown Jackson", k=1, search_type="script_scoring",)print(docs[0].page_content)similarity_search using Painless Scripting​similarity_search using Painless Scripting with Custom Parametersdocsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search( "What did the president say about Ketanji Brown Jackson", search_type="painless_scripting", space_type="cosineSimilarity", pre_filter=filter,)print(docs[0].page_content)Maximum marginal relevance search (MMR)​If you’d like to look up for some similar documents, but you’d also like to receive diverse results, MMR is method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10, lambda_param=0.5)Using a preexisting OpenSearch instance​It's also possible to use a preexisting OpenSearch instance with documents that already have vectors present.# this is just an example, you would need to change these values to point to another opensearch instancedocsearch = OpenSearchVectorSearch( index_name="index-*", embedding_function=embeddings, opensearch_url="http://localhost:9200",)# you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadatadocs = docsearch.similarity_search( "Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. ->: docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search( "What did the president say about Ketanji Brown Jackson", k=1, search_type="script_scoring",)print(docs[0].page_content)similarity_search using Painless Scripting​similarity_search using Painless Scripting with Custom Parametersdocsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search( "What did the president say about Ketanji Brown Jackson", search_type="painless_scripting", space_type="cosineSimilarity", pre_filter=filter,)print(docs[0].page_content)Maximum marginal relevance search (MMR)​If you’d like to look up for some similar documents, but you’d also like to receive diverse results, MMR is method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10, lambda_param=0.5)Using a preexisting OpenSearch instance​It's also possible to use a preexisting OpenSearch instance with documents that already have vectors present.# this is just an example, you would need to change these values to point to another opensearch instancedocsearch = OpenSearchVectorSearch( index_name="index-*", embedding_function=embeddings, opensearch_url="http://localhost:9200",)# you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadatadocs = docsearch.similarity_search( "Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", |
871 | space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata",)Using AOSS (Amazon OpenSearch Service Serverless)‚Äã# This is just an example to show how to use AOSS with faiss engine and efficient_filter, you need to set proper values.service = 'aoss' # must set the service as 'aoss'region = 'us-east-2'credentials = boto3.Session(aws_access_key_id='xxxxxx',aws_secret_access_key='xxxxx').get_credentials()awsauth = AWS4Auth('xxxxx', 'xxxxxx', region,service, session_token=credentials.token)docsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="host url", http_auth=awsauth, timeout = 300, use_ssl = True, verify_certs = True, connection_class = RequestsHttpConnection, index_name="test-index-using-aoss", engine="faiss",)docs = docsearch.similarity_search( "What is feature selection", efficient_filter=filter, k=200,)Using AOS (Amazon OpenSearch Service)‚Äã# This is just an example to show how to use AOS , you need to set proper values.service = 'es' # must set the service as 'es'region = 'us-east-2'credentials = boto3.Session(aws_access_key_id='xxxxxx',aws_secret_access_key='xxxxx').get_credentials()awsauth = AWS4Auth('xxxxx', 'xxxxxx', region,service, session_token=credentials.token)docsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="host url", http_auth=awsauth, timeout = 300, use_ssl = True, verify_certs = True, connection_class = RequestsHttpConnection, index_name="test-index",)docs = docsearch.similarity_search( "What is feature selection", k=200,)PreviousNucliaDBNextPostgres EmbeddingInstallationsimilarity_search using Approximate k-NNsimilarity_search using Script Scoringsimilarity_search using Painless ScriptingMaximum marginal relevance search (MMR)Using a preexisting OpenSearch instanceUsing AOSS (Amazon OpenSearch Service Serverless)Using AOS (Amazon | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. ->: space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata",)Using AOSS (Amazon OpenSearch Service Serverless)‚Äã# This is just an example to show how to use AOSS with faiss engine and efficient_filter, you need to set proper values.service = 'aoss' # must set the service as 'aoss'region = 'us-east-2'credentials = boto3.Session(aws_access_key_id='xxxxxx',aws_secret_access_key='xxxxx').get_credentials()awsauth = AWS4Auth('xxxxx', 'xxxxxx', region,service, session_token=credentials.token)docsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="host url", http_auth=awsauth, timeout = 300, use_ssl = True, verify_certs = True, connection_class = RequestsHttpConnection, index_name="test-index-using-aoss", engine="faiss",)docs = docsearch.similarity_search( "What is feature selection", efficient_filter=filter, k=200,)Using AOS (Amazon OpenSearch Service)‚Äã# This is just an example to show how to use AOS , you need to set proper values.service = 'es' # must set the service as 'es'region = 'us-east-2'credentials = boto3.Session(aws_access_key_id='xxxxxx',aws_secret_access_key='xxxxx').get_credentials()awsauth = AWS4Auth('xxxxx', 'xxxxxx', region,service, session_token=credentials.token)docsearch = OpenSearchVectorSearch.from_documents( docs, embeddings, opensearch_url="host url", http_auth=awsauth, timeout = 300, use_ssl = True, verify_certs = True, connection_class = RequestsHttpConnection, index_name="test-index",)docs = docsearch.similarity_search( "What is feature selection", k=200,)PreviousNucliaDBNextPostgres EmbeddingInstallationsimilarity_search using Approximate k-NNsimilarity_search using Script Scoringsimilarity_search using Painless ScriptingMaximum marginal relevance search (MMR)Using a preexisting OpenSearch instanceUsing AOSS (Amazon OpenSearch Service Serverless)Using AOS (Amazon |
872 | OpenSearch Service Serverless)Using AOS (Amazon OpenSearch Service)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. | OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene. ->: OpenSearch Service Serverless)Using AOS (Amazon OpenSearch Service)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
873 | DocArray InMemorySearch | ü¶úÔ∏èüîó Langchain | DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. | DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. ->: DocArray InMemorySearch | ü¶úÔ∏èüîó Langchain |
874 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesDocArray InMemorySearchOn this pageDocArray InMemorySearchDocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.This notebook shows how to use functionality related to the DocArrayInMemorySearch.Setup‚ÄãUncomment the below cells to install docarray and get/set your OpenAI api key if you haven't already done so.# !pip install "docarray"# Get an OpenAI token: https://platform.openai.com/account/api-keys# import os# from getpass import getpass# OPENAI_API_KEY = getpass()# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEYUsing DocArrayInMemorySearch‚Äãfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import DocArrayInMemorySearchfrom langchain.document_loaders import TextLoaderdocuments = TextLoader("../../modules/state_of_the_union.txt").load()text_splitter = | DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. | DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesDocArray InMemorySearchOn this pageDocArray InMemorySearchDocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.This notebook shows how to use functionality related to the DocArrayInMemorySearch.Setup‚ÄãUncomment the below cells to install docarray and get/set your OpenAI api key if you haven't already done so.# !pip install "docarray"# Get an OpenAI token: https://platform.openai.com/account/api-keys# import os# from getpass import getpass# OPENAI_API_KEY = getpass()# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEYUsing DocArrayInMemorySearch‚Äãfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import DocArrayInMemorySearchfrom langchain.document_loaders import TextLoaderdocuments = TextLoader("../../modules/state_of_the_union.txt").load()text_splitter = |
875 | = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = DocArrayInMemorySearch.from_documents(docs, embeddings)Similarity search​query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with score​The returned distance score is cosine distance. Therefore, a lower score is better.docs = db.similarity_search_with_score(query)docs[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I | DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. | DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. ->: = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = DocArrayInMemorySearch.from_documents(docs, embeddings)Similarity search​query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with score​The returned distance score is cosine distance. Therefore, a lower score is better.docs = db.similarity_search_with_score(query)docs[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I |
876 | Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}), 0.8154190158347903)PreviousDocArray HnswSearchNextElasticsearchSetupUsing DocArrayInMemorySearchSimilarity searchSimilarity search with scoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. | DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. ->: Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}), 0.8154190158347903)PreviousDocArray HnswSearchNextElasticsearchSetupUsing DocArrayInMemorySearchSimilarity searchSimilarity search with scoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
877 | Typesense | ü¶úÔ∏èüîó Langchain | Typesense is an open-source, in-memory search engine, that you can either self-host or run on Typesense Cloud. | Typesense is an open-source, in-memory search engine, that you can either self-host or run on Typesense Cloud. ->: Typesense | ü¶úÔ∏èüîó Langchain |
878 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesTypesenseOn this pageTypesenseTypesense is an open-source, in-memory search engine, that you can either self-host or run on Typesense Cloud.Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and also focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.It also lets you combine attribute-based filtering together with vector queries, to fetch the most relevant documents.This notebook shows you how to use Typesense as your VectorStore.Let's first install our dependencies:pip install typesense openapi-schema-pydantic openai tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Typesensefrom langchain.document_loaders import TextLoaderLet's import | Typesense is an open-source, in-memory search engine, that you can either self-host or run on Typesense Cloud. | Typesense is an open-source, in-memory search engine, that you can either self-host or run on Typesense Cloud. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesTypesenseOn this pageTypesenseTypesense is an open-source, in-memory search engine, that you can either self-host or run on Typesense Cloud.Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and also focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.It also lets you combine attribute-based filtering together with vector queries, to fetch the most relevant documents.This notebook shows you how to use Typesense as your VectorStore.Let's first install our dependencies:pip install typesense openapi-schema-pydantic openai tiktokenWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Typesensefrom langchain.document_loaders import TextLoaderLet's import |
879 | import TextLoaderLet's import our test dataset:loader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()docsearch = Typesense.from_documents( docs, embeddings, typesense_client_params={ "host": "localhost", # Use xxx.a1.typesense.net for Typesense Cloud "port": "8108", # Use 443 for Typesense Cloud "protocol": "http", # Use https for Typesense Cloud "typesense_api_key": "xyz", "typesense_collection_name": "lang-chain", },)Similarity Search​query = "What did the president say about Ketanji Brown Jackson"found_docs = docsearch.similarity_search(query)print(found_docs[0].page_content)Typesense as a Retriever​Typesense, as all the other vector stores, is a LangChain Retriever, by using cosine similarity.retriever = docsearch.as_retriever()retrieverquery = "What did the president say about Ketanji Brown Jackson"retriever.get_relevant_documents(query)[0]PreviousTimescale Vector (Postgres)NextUSearchSimilarity SearchTypesense as a RetrieverCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Typesense is an open-source, in-memory search engine, that you can either self-host or run on Typesense Cloud. | Typesense is an open-source, in-memory search engine, that you can either self-host or run on Typesense Cloud. ->: import TextLoaderLet's import our test dataset:loader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()docsearch = Typesense.from_documents( docs, embeddings, typesense_client_params={ "host": "localhost", # Use xxx.a1.typesense.net for Typesense Cloud "port": "8108", # Use 443 for Typesense Cloud "protocol": "http", # Use https for Typesense Cloud "typesense_api_key": "xyz", "typesense_collection_name": "lang-chain", },)Similarity Search​query = "What did the president say about Ketanji Brown Jackson"found_docs = docsearch.similarity_search(query)print(found_docs[0].page_content)Typesense as a Retriever​Typesense, as all the other vector stores, is a LangChain Retriever, by using cosine similarity.retriever = docsearch.as_retriever()retrieverquery = "What did the president say about Ketanji Brown Jackson"retriever.get_relevant_documents(query)[0]PreviousTimescale Vector (Postgres)NextUSearchSimilarity SearchTypesense as a RetrieverCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
880 | Pinecone | ü¶úÔ∏èüîó Langchain | Pinecone is a vector database with broad functionality. | Pinecone is a vector database with broad functionality. ->: Pinecone | ü¶úÔ∏èüîó Langchain |
881 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesPineconeOn this pagePineconePinecone is a vector database with broad functionality.This notebook shows how to use functionality related to the Pinecone vector database.To use Pinecone, you must have an API key. | Pinecone is a vector database with broad functionality. | Pinecone is a vector database with broad functionality. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesPineconeOn this pagePineconePinecone is a vector database with broad functionality.This notebook shows how to use functionality related to the Pinecone vector database.To use Pinecone, you must have an API key. |
882 | Here are the installation instructions.pip install pinecone-client openai tiktoken langchainimport osimport getpassos.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API Key:")os.environ["PINECONE_ENV"] = getpass.getpass("Pinecone Environment:")We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Pineconefrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()import pinecone# initialize pineconepinecone.init( api_key=os.getenv("PINECONE_API_KEY"), # find at app.pinecone.io environment=os.getenv("PINECONE_ENV"), # next to api key in console)index_name = "langchain-demo"# First, check if our index already exists. If it doesn't, we create itif index_name not in pinecone.list_indexes(): # we create a new index pinecone.create_index( name=index_name, metric='cosine', dimension=1536 )# The OpenAI embedding model `text-embedding-ada-002 uses 1536 dimensions`docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)# if you already have an index, you can load it like this# docsearch = Pinecone.from_existing_index(index_name, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)Adding More Text to an Existing Index‚ÄãMore text can embedded and upserted to an existing Pinecone index using the add_texts functionindex = pinecone.Index("langchain-demo")vectorstore = Pinecone(index, embeddings.embed_query, "text")vectorstore.add_texts("More | Pinecone is a vector database with broad functionality. | Pinecone is a vector database with broad functionality. ->: Here are the installation instructions.pip install pinecone-client openai tiktoken langchainimport osimport getpassos.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API Key:")os.environ["PINECONE_ENV"] = getpass.getpass("Pinecone Environment:")We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Pineconefrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()import pinecone# initialize pineconepinecone.init( api_key=os.getenv("PINECONE_API_KEY"), # find at app.pinecone.io environment=os.getenv("PINECONE_ENV"), # next to api key in console)index_name = "langchain-demo"# First, check if our index already exists. If it doesn't, we create itif index_name not in pinecone.list_indexes(): # we create a new index pinecone.create_index( name=index_name, metric='cosine', dimension=1536 )# The OpenAI embedding model `text-embedding-ada-002 uses 1536 dimensions`docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)# if you already have an index, you can load it like this# docsearch = Pinecone.from_existing_index(index_name, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)Adding More Text to an Existing Index‚ÄãMore text can embedded and upserted to an existing Pinecone index using the add_texts functionindex = pinecone.Index("langchain-demo")vectorstore = Pinecone(index, embeddings.embed_query, "text")vectorstore.add_texts("More |
883 | "text")vectorstore.add_texts("More text!")Maximal Marginal Relevance Searches​In addition to using similarity search in the retriever object, you can also use mmr as retriever.retriever = docsearch.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content)Or use max_marginal_relevance_search directly:found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")PreviousPGVectorNextQdrantAdding More Text to an Existing IndexMaximal Marginal Relevance SearchesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Pinecone is a vector database with broad functionality. | Pinecone is a vector database with broad functionality. ->: "text")vectorstore.add_texts("More text!")Maximal Marginal Relevance Searches​In addition to using similarity search in the retriever object, you can also use mmr as retriever.retriever = docsearch.as_retriever(search_type="mmr")matched_docs = retriever.get_relevant_documents(query)for i, d in enumerate(matched_docs): print(f"\n## Document {i}\n") print(d.page_content)Or use max_marginal_relevance_search directly:found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n")PreviousPGVectorNextQdrantAdding More Text to an Existing IndexMaximal Marginal Relevance SearchesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
884 | AnalyticDB | ü¶úÔ∏èüîó Langchain | AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online. | AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online. ->: AnalyticDB | ü¶úÔ∏èüîó Langchain |
885 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesAnalyticDBAnalyticDBAnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.AnalyticDB for PostgreSQL is developed based on the open-source Greenplum Database project and is enhanced with in-depth extensions by Alibaba Cloud. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.This notebook shows how to use functionality related to the AnalyticDB vector database. | AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online. | AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesAnalyticDBAnalyticDBAnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.AnalyticDB for PostgreSQL is developed based on the open-source Greenplum Database project and is enhanced with in-depth extensions by Alibaba Cloud. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.This notebook shows how to use functionality related to the AnalyticDB vector database. |
886 | To run, you should have an AnalyticDB instance up and running:Using AnalyticDB Cloud Vector Database. Click here to fast deploy it.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import AnalyticDBSplit documents and get embeddings by call OpenAI APIfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connect to AnalyticDB by setting related ENVIRONMENTS.export PG_HOST={your_analyticdb_hostname}export PG_PORT={your_analyticdb_port} # Optional, default is 5432export PG_DATABASE={your_database} # Optional, default is postgresexport PG_USER={database_username}export PG_PASSWORD={database_password}Then store your embeddings and documents into AnalyticDBimport osconnection_string = AnalyticDB.connection_string_from_db_params( driver=os.environ.get("PG_DRIVER", "psycopg2cffi"), host=os.environ.get("PG_HOST", "localhost"), port=int(os.environ.get("PG_PORT", "5432")), database=os.environ.get("PG_DATABASE", "postgres"), user=os.environ.get("PG_USER", "postgres"), password=os.environ.get("PG_PASSWORD", "postgres"),)vector_db = AnalyticDB.from_documents( docs, embeddings, connection_string=connection_string,)Query and retrieve dataquery = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring | AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online. | AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online. ->: To run, you should have an AnalyticDB instance up and running:Using AnalyticDB Cloud Vector Database. Click here to fast deploy it.from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import AnalyticDBSplit documents and get embeddings by call OpenAI APIfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connect to AnalyticDB by setting related ENVIRONMENTS.export PG_HOST={your_analyticdb_hostname}export PG_PORT={your_analyticdb_port} # Optional, default is 5432export PG_DATABASE={your_database} # Optional, default is postgresexport PG_USER={database_username}export PG_PASSWORD={database_password}Then store your embeddings and documents into AnalyticDBimport osconnection_string = AnalyticDB.connection_string_from_db_params( driver=os.environ.get("PG_DRIVER", "psycopg2cffi"), host=os.environ.get("PG_HOST", "localhost"), port=int(os.environ.get("PG_PORT", "5432")), database=os.environ.get("PG_DATABASE", "postgres"), user=os.environ.get("PG_USER", "postgres"), password=os.environ.get("PG_PASSWORD", "postgres"),)vector_db = AnalyticDB.from_documents( docs, embeddings, connection_string=connection_string,)Query and retrieve dataquery = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring |
887 | veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.PreviousAlibaba Cloud OpenSearchNextAnnoyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online. | AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online. ->: veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.PreviousAlibaba Cloud OpenSearchNextAnnoyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
888 | USearch | ü¶úÔ∏èüîó Langchain | USearch is a Smaller & Faster Single-File Vector Search Engine | USearch is a Smaller & Faster Single-File Vector Search Engine ->: USearch | ü¶úÔ∏èüîó Langchain |
889 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesUSearchOn this pageUSearchUSearch is a Smaller & Faster Single-File Vector Search EngineUSearch's base functionality is identical to FAISS, and the interface should look familiar if you have ever investigated Approximate Nearest Neigbors search. FAISS is a widely recognized standard for high-performance vector search engines. USearch and FAISS both employ the same HNSW algorithm, but they differ significantly in their design principles. USearch is compact and broadly compatible without sacrificing performance, with a primary focus on user-defined metrics and fewer dependencies.pip install usearchWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import USearchfrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import | USearch is a Smaller & Faster Single-File Vector Search Engine | USearch is a Smaller & Faster Single-File Vector Search Engine ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesUSearchOn this pageUSearchUSearch is a Smaller & Faster Single-File Vector Search EngineUSearch's base functionality is identical to FAISS, and the interface should look familiar if you have ever investigated Approximate Nearest Neigbors search. FAISS is a widely recognized standard for high-performance vector search engines. USearch and FAISS both employ the same HNSW algorithm, but they differ significantly in their design principles. USearch is compact and broadly compatible without sacrificing performance, with a primary focus on user-defined metrics and fewer dependencies.pip install usearchWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import USearchfrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import |
890 | TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../extras/modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = USearch.from_documents(docs, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity Search with score​The similarity_search_with_score method allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance. Therefore, a lower score is better.docs_and_scores = db.similarity_search_with_score(query)docs_and_scores[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, | USearch is a Smaller & Faster Single-File Vector Search Engine | USearch is a Smaller & Faster Single-File Vector Search Engine ->: TextLoaderfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../extras/modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = USearch.from_documents(docs, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity Search with score​The similarity_search_with_score method allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance. Therefore, a lower score is better.docs_and_scores = db.similarity_search_with_score(query)docs_and_scores[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, |
891 | Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../extras/modules/state_of_the_union.txt'}), 0.1845687)PreviousTypesenseNextValdSimilarity Search with scoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | USearch is a Smaller & Faster Single-File Vector Search Engine | USearch is a Smaller & Faster Single-File Vector Search Engine ->: Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../extras/modules/state_of_the_union.txt'}), 0.1845687)PreviousTypesenseNextValdSimilarity Search with scoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
892 | Qdrant | ü¶úÔ∏èüîó Langchain | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. ->: Qdrant | ü¶úÔ∏èüîó Langchain |
893 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesQdrantOn this pageQdrantQdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.This notebook shows how to use functionality related to the Qdrant vector database. There are various modes of how to run Qdrant, and depending on the chosen one, there will be some subtle differences. The options include:Local mode, no server requiredOn-premise server deploymentQdrant CloudSee the installation instructions.pip install qdrant-clientWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ¬∑¬∑¬∑¬∑¬∑¬∑¬∑¬∑from langchain.embeddings.openai import OpenAIEmbeddingsfrom | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesActiveloop Deep LakeAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cosmos DBAzure Cognitive SearchBagelDBCassandraChromaClarifaiClickHouseDashVectorDingoDocArray HnswSearchDocArray InMemorySearchElasticsearchEpsillaFaissHologresLanceDBLLMRailsMarqoGoogle Vertex AI MatchingEngineMeilisearchMilvusMomento Vector Index (MVI)MongoDB AtlasMyScaleNeo4j Vector IndexNucliaDBOpenSearchPostgres EmbeddingPGVectorPineconeQdrantRedisRocksetScaNNSingleStoreDBscikit-learnsqlite-vssStarRocksSupabase (Postgres)TairTencent Cloud VectorDBTigrisTimescale Vector (Postgres)TypesenseUSearchValdvearchVectaravectorstoresVespaWeaviateXataZepZillizRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsVector storesQdrantOn this pageQdrantQdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.This notebook shows how to use functionality related to the Qdrant vector database. There are various modes of how to run Qdrant, and depending on the chosen one, there will be some subtle differences. The options include:Local mode, no server requiredOn-premise server deploymentQdrant CloudSee the installation instructions.pip install qdrant-clientWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ¬∑¬∑¬∑¬∑¬∑¬∑¬∑¬∑from langchain.embeddings.openai import OpenAIEmbeddingsfrom |
894 | import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Qdrantfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connecting to Qdrant from LangChain‚ÄãLocal mode‚ÄãPython client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kepy in memory or persisted on disk.In-memory‚ÄãFor some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.qdrant = Qdrant.from_documents( docs, embeddings, location=":memory:", # Local mode with in-memory storage only collection_name="my_documents",)On-disk storage‚ÄãLocal mode, without using the Qdrant server, may also store your vectors on disk so they're persisted between runs.qdrant = Qdrant.from_documents( docs, embeddings, path="/tmp/local_qdrant", collection_name="my_documents",)On-premise server deployment‚ÄãNo matter if you choose to launch Qdrant locally with a Docker container, or select a Kubernetes deployment with the official Helm chart, the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service.url = "<---qdrant url here --->"qdrant = Qdrant.from_documents( docs, embeddings, url=url, prefer_grpc=True, collection_name="my_documents",)Qdrant Cloud‚ÄãIf you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on Qdrant Cloud. There is a free forever 1GB cluster included for | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. ->: import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Qdrantfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../modules/state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connecting to Qdrant from LangChain‚ÄãLocal mode‚ÄãPython client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kepy in memory or persisted on disk.In-memory‚ÄãFor some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.qdrant = Qdrant.from_documents( docs, embeddings, location=":memory:", # Local mode with in-memory storage only collection_name="my_documents",)On-disk storage‚ÄãLocal mode, without using the Qdrant server, may also store your vectors on disk so they're persisted between runs.qdrant = Qdrant.from_documents( docs, embeddings, path="/tmp/local_qdrant", collection_name="my_documents",)On-premise server deployment‚ÄãNo matter if you choose to launch Qdrant locally with a Docker container, or select a Kubernetes deployment with the official Helm chart, the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service.url = "<---qdrant url here --->"qdrant = Qdrant.from_documents( docs, embeddings, url=url, prefer_grpc=True, collection_name="my_documents",)Qdrant Cloud‚ÄãIf you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on Qdrant Cloud. There is a free forever 1GB cluster included for |
895 | There is a free forever 1GB cluster included for trying out. The main difference with using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly.url = "<---qdrant cloud cluster url here --->"api_key = "<---api key here--->"qdrant = Qdrant.from_documents( docs, embeddings, url=url, prefer_grpc=True, api_key=api_key, collection_name="my_documents",)Recreating the collection​Both Qdrant.from_texts and Qdrant.from_documents methods are great to start using Qdrant with Langchain. In the previous versions the collection was recreated every time you called any of them. That behaviour has changed. Currently, the collection is going to be reused if it already exists. Setting force_recreate to True allows to remove the old collection and start from scratch.url = "<---qdrant url here --->"qdrant = Qdrant.from_documents( docs, embeddings, url=url, prefer_grpc=True, collection_name="my_documents", force_recreate=True,)Similarity search​The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the embedding_function and used to find similar documents in Qdrant collection.query = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.similarity_search(query)print(found_docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. ->: There is a free forever 1GB cluster included for trying out. The main difference with using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly.url = "<---qdrant cloud cluster url here --->"api_key = "<---api key here--->"qdrant = Qdrant.from_documents( docs, embeddings, url=url, prefer_grpc=True, api_key=api_key, collection_name="my_documents",)Recreating the collection​Both Qdrant.from_texts and Qdrant.from_documents methods are great to start using Qdrant with Langchain. In the previous versions the collection was recreated every time you called any of them. That behaviour has changed. Currently, the collection is going to be reused if it already exists. Setting force_recreate to True allows to remove the old collection and start from scratch.url = "<---qdrant url here --->"qdrant = Qdrant.from_documents( docs, embeddings, url=url, prefer_grpc=True, collection_name="my_documents", force_recreate=True,)Similarity search​The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the embedding_function and used to find similar documents in Qdrant collection.query = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.similarity_search(query)print(found_docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United |
896 | has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with score​Sometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result. | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. ->: has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with score​Sometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result. |
897 | The returned distance score is cosine distance. Therefore, a lower score is better.query = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.similarity_search_with_score(query)document, score = found_docs[0]print(document.page_content)print(f"\nScore: {score}") Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. Score: 0.8153784913324512Metadata filtering​Qdrant has an extensive filtering system with rich type support. It is also possible to use the filters in Langchain, by passing an additional param to both the similarity_search_with_score and similarity_search methods.from qdrant_client.http import models as restquery = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))Maximum marginal relevance search (MMR)​If you'd like to look up for some similar documents, but you'd also like to receive diverse results, MMR is method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.query = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. ->: The returned distance score is cosine distance. Therefore, a lower score is better.query = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.similarity_search_with_score(query)document, score = found_docs[0]print(document.page_content)print(f"\nScore: {score}") Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. Score: 0.8153784913324512Metadata filtering​Qdrant has an extensive filtering system with rich type support. It is also possible to use the filters in Langchain, by passing an additional param to both the similarity_search_with_score and similarity_search methods.from qdrant_client.http import models as restquery = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))Maximum marginal relevance search (MMR)​If you'd like to look up for some similar documents, but you'd also like to receive diverse results, MMR is method you should consider. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents.query = "What did the president say about Ketanji Brown Jackson"found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)for i, doc in |
898 | k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n") 1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. 2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. Qdrant as a Retriever​Qdrant, as all the other vector stores, is a | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. ->: k=2, fetch_k=10)for i, doc in enumerate(found_docs): print(f"{i + 1}.", doc.page_content, "\n") 1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. 2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. Qdrant as a Retriever​Qdrant, as all the other vector stores, is a |
899 | as all the other vector stores, is a LangChain Retriever, by using cosine similarity. retriever = qdrant.as_retriever()retriever VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='similarity', search_kwargs={})It might be also specified to use MMR as a search strategy, instead of similarity.retriever = qdrant.as_retriever(search_type="mmr")retriever VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='mmr', search_kwargs={})query = "What did the president say about Ketanji Brown Jackson"retriever.get_relevant_documents(query)[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})Customizing Qdrant​There are some options to use an existing Qdrant collection within your Langchain application. In such cases you may need to define how to map Qdrant point into the Langchain Document.Named vectors​Qdrant supports multiple vectors per point by named vectors. Langchain requires just a single embedding per document and, by default, uses a single vector. However, if you work with a collection created externally or want to have | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. | Qdrant (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications. ->: as all the other vector stores, is a LangChain Retriever, by using cosine similarity. retriever = qdrant.as_retriever()retriever VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='similarity', search_kwargs={})It might be also specified to use MMR as a search strategy, instead of similarity.retriever = qdrant.as_retriever(search_type="mmr")retriever VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='mmr', search_kwargs={})query = "What did the president say about Ketanji Brown Jackson"retriever.get_relevant_documents(query)[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})Customizing Qdrant​There are some options to use an existing Qdrant collection within your Langchain application. In such cases you may need to define how to map Qdrant point into the Langchain Document.Named vectors​Qdrant supports multiple vectors per point by named vectors. Langchain requires just a single embedding per document and, by default, uses a single vector. However, if you work with a collection created externally or want to have |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.