Unnamed: 0 | page content | description | output
---|---|---|---
3,500 | Modal | 🦜️🔗 Langchain | This page covers how to use the Modal ecosystem to run LangChain custom LLMs. |
3,501 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreActiveloop Deep LakeAI21 LabsAimAINetworkAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasAwaDBAWS DynamoDBAZLyricsBagelDBBananaBasetenBeamBeautiful SoupBiliBiliNIBittensorBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLClickHouseCnosDBCohereCollege ConfidentialCometConfident AIConfluenceC TransformersDashVectorDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeepSparseDiffbotDingoDiscordDocArrayDoctranDocugamiDuckDBElasticsearchEpsillaEverNoteFacebook ChatFacebook FaissFigmaFireworksFlyteForefrontAIGitGitBookGoldenGoogle Document AIGoogle SerperGooseAIGPT4AllGradientGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHTML to textHugging FaceiFixitIMSDbInfinoJavelin AI GatewayJinaKonkoLanceDBLangChain Decorators ✨Llama.cppLog10MarqoMediaWikiDumpMeilisearchMetalMilvusMinimaxMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMongoDB AtlasMotherduckMotörheadMyScaleNeo4jNLPCloudNotion DBNucliaObsidianOpenLLMOpenSearchOpenWeatherMapPetalsPostgres EmbeddingPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerprovidersPsychicPubMedQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4ScaNNSearchApiSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeSupabase (Postgres)NebulaTairTelegramTencentVectorDBTensorFlow DatasetsTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredUpstash RedisUSearchVearchVectaraVespaWandB TracingWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterXataXorbits Inference (Xinference)YandexYeager.aiYouTubeZepZillizComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat | This page covers how to use the Modal ecosystem to run LangChain custom LLMs. |
3,502 | Modal: This page covers how to use the Modal ecosystem to run LangChain custom LLMs. | This page covers how to use the Modal ecosystem to run LangChain custom LLMs. |
3,503 | It is broken into two parts:

1. Modal installation and web endpoint deployment
2. Using deployed web endpoint with LLM wrapper class.

Installation and Setup

- Install with `pip install modal`
- Run `modal token new`

Define your Modal Functions and Webhooks

You must include a prompt. There is a rigid response structure:

```python
class Item(BaseModel):
    prompt: str

@stub.function()
@modal.web_endpoint(method="POST")
def get_text(item: Item):
    return {"prompt": run_gpt2.call(item.prompt)}
```

The following is an example with the GPT2 model:

```python
from pydantic import BaseModel
import modal

CACHE_PATH = "/root/model_cache"

class Item(BaseModel):
    prompt: str

stub = modal.Stub(name="example-get-started-with-langchain")

def download_model():
    from transformers import GPT2Tokenizer, GPT2LMHeadModel
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
    model = GPT2LMHeadModel.from_pretrained('gpt2')
    tokenizer.save_pretrained(CACHE_PATH)
    model.save_pretrained(CACHE_PATH)

# Define a container image for the LLM function below, which
# downloads and stores the GPT-2 model.
image = modal.Image.debian_slim().pip_install(
    "tokenizers", "transformers", "torch", "accelerate"
).run_function(download_model)

@stub.function(
    gpu="any",
    image=image,
    retries=3,
)
def run_gpt2(text: str):
    from transformers import GPT2Tokenizer, GPT2LMHeadModel
    tokenizer = GPT2Tokenizer.from_pretrained(CACHE_PATH)
    model = GPT2LMHeadModel.from_pretrained(CACHE_PATH)
    encoded_input = tokenizer(text, return_tensors='pt').input_ids
    output = model.generate(encoded_input, max_length=50, do_sample=True)
    return tokenizer.decode(output[0], skip_special_tokens=True)

@stub.function()
@modal.web_endpoint(method="POST")
def get_text(item: Item):
    return {"prompt": run_gpt2.call(item.prompt)}
```

Deploy the web endpoint

Deploy the web endpoint to Modal cloud with the `modal deploy` CLI command. | This page covers how to use the Modal ecosystem to run LangChain custom LLMs. |
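Once deployed, the endpoint accepts a POST with a JSON body matching the `Item` model and, per the `get_text` webhook above, returns a JSON object with a `prompt` key. The following is a minimal stdlib client sketch; the URL is a placeholder, and the request/response helpers simply mirror the structure documented above:

```python
import json
import urllib.request

def build_request(prompt: str) -> bytes:
    # The endpoint expects a JSON body matching the Item model: {"prompt": ...}
    return json.dumps({"prompt": prompt}).encode("utf-8")

def parse_response(raw: bytes) -> str:
    # get_text returns {"prompt": <generated text>}
    return json.loads(raw.decode("utf-8"))["prompt"]

def call_endpoint(url: str, prompt: str) -> str:
    # url is a placeholder; replace with your deployed modal.run endpoint.
    req = urllib.request.Request(
        url,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return parse_response(resp.read())
```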
3,504 | Your web endpoint will acquire a persistent URL under the modal.run domain.

LLM wrapper around Modal web endpoint

The Modal LLM wrapper class accepts your deployed web endpoint's URL:

```python
from langchain.chains import LLMChain  # needed for LLMChain below
from langchain.llms import Modal

endpoint_url = "https://ecorp--custom-llm-endpoint.modal.run"  # REPLACE ME with your deployed Modal web endpoint's URL
llm = Modal(endpoint_url=endpoint_url)
llm_chain = LLMChain(prompt=prompt, llm=llm)  # `prompt` is a PromptTemplate defined elsewhere

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
```

PreviousMLflow | NextModelScope. On this page: Installation and Setup; Define your Modal Functions and Webhooks; Deploy the web endpoint; LLM wrapper around Modal web endpoint. Community: Discord, Twitter, GitHub. Python, JS/TS. More: Homepage, Blog. Copyright © 2023 LangChain, Inc. | This page covers how to use the Modal ecosystem to run LangChain custom LLMs. |
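The `prompt` handed to `LLMChain` above is a LangChain PromptTemplate defined earlier in the docs. As an illustration only (plain string formatting, not LangChain's API), template interpolation amounts to the following; the template text here is a common example, not taken from this page:

```python
def format_prompt(template: str, **variables) -> str:
    # PromptTemplate-style interpolation using {placeholder} fields.
    return template.format(**variables)

template = "Question: {question}\n\nAnswer: Let's think step by step."
prompt_text = format_prompt(
    template,
    question="What NFL team won the Super Bowl in the year Justin Bieber was born?",
)
```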
3,505 | Weather | 🦜️🔗 Langchain | OpenWeatherMap is an open-source weather service provider. |
3,506 | [repeated site navigation sidebar; see row 3,501] | OpenWeatherMap is an open-source weather service provider. |
3,507 | Weather. OpenWeatherMap is an open-source weather service provider.

Installation and Setup

Install with `pip install pyowm`. We must set up the OpenWeatherMap API token.

Document Loader

See a usage example.

```python
from langchain.document_loaders import WeatherDataLoader
```

PreviousWeights & Biases | NextWeaviate | OpenWeatherMap is an open-source weather service provider. |
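Setting up the token usually means exporting it as an environment variable before constructing the loader. The variable name `OPENWEATHERMAP_API_KEY` and the `from_params` constructor in the trailing comment are assumptions based on common LangChain conventions, so treat this as a sketch:

```python
import os

# Hypothetical token value; replace with a real OpenWeatherMap API key.
os.environ["OPENWEATHERMAP_API_KEY"] = "your-openweathermap-token"

def get_weather_token() -> str:
    # Fail fast if the token is missing, rather than erroring deep inside the loader.
    token = os.environ.get("OPENWEATHERMAP_API_KEY")
    if not token:
        raise RuntimeError("Set OPENWEATHERMAP_API_KEY before using WeatherDataLoader")
    return token

# With the token in place, the loader from the docs would be built roughly as:
# from langchain.document_loaders import WeatherDataLoader
# loader = WeatherDataLoader.from_params(["London"], openweathermap_api_key=get_weather_token())
```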
3,508 | Vectara | 🦜️🔗 Langchain | Vectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation |
3,509 | [repeated site navigation sidebar; see row 3,501] | Vectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation |
3,510 | Vectara: Vectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation | Vectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation |
3,511 | (aka Retrieval-Augmented Generation, or RAG) applications.

Vectara Overview:

- Vectara is a developer-first API platform for building GenAI applications.
- To use Vectara, first sign up and create an account. Then create a corpus and an API key for indexing and searching.
- You can use Vectara's indexing API to add documents into Vectara's index.
- You can use Vectara's Search API to query Vectara's index (which also supports hybrid search implicitly).
- You can use Vectara's integration with LangChain as a vector store or via the retriever abstraction.

Installation and Setup

To use Vectara with LangChain, no special installation steps are required.
To get started, sign up and follow our quickstart guide to create a corpus and an API key. | Vectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation |
3,512 | Once you have these, you can provide them as arguments to the Vectara vectorstore, or you can set them as environment variables:

```shell
export VECTARA_CUSTOMER_ID="your_customer_id"
export VECTARA_CORPUS_ID="your_corpus_id"
export VECTARA_API_KEY="your-vectara-api-key"
```

Vector Store

There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.

To import this vectorstore:

```python
from langchain.vectorstores import Vectara
```

To create an instance of the Vectara vectorstore:

```python
vectara = Vectara(
    vectara_customer_id=customer_id,
    vectara_corpus_id=corpus_id,
    vectara_api_key=api_key,
)
```

The customer_id, corpus_id and api_key are optional; if they are not supplied, they will be read from the environment variables VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY, respectively.

After you have the vectorstore, you can add_texts or add_documents as per the standard VectorStore interface, for example:

```python
vectara.add_texts(["to be or not to be", "that is the question"])
```

Since Vectara supports file upload, we also added the ability to upload files (PDF, TXT, HTML, PPT, DOC, etc.) directly. When using this method, the file is uploaded directly to the Vectara backend, then processed and chunked optimally there, so you don't have to use the LangChain document loader or chunking mechanism. As an example:

```python
vectara.add_files(["path/to/file1.pdf", "path/to/file2.pdf", ...])
```

To query the vectorstore, you can use the similarity_search method (or similarity_search_with_score), which takes a query string and returns a list of results:

```python
results = vectara.similarity_search("what is LangChain?")
```

similarity_search_with_score also supports the following additional arguments:

- k: number of results to return (defaults to 5)
- lambda_val: the lexical matching factor for hybrid search (defaults to 0.025)
- filter: a filter to apply to the results (default None)
- n_sentence_context: number of sentences to include before/after the actual matching segment | Vectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation |
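The credential precedence described above (explicit constructor arguments first, then the VECTARA_* environment variables) can be sketched as a small resolver. The real logic lives inside the `Vectara` class, so this is only an illustration of the documented behavior:

```python
import os

def resolve_vectara_credentials(
    vectara_customer_id=None, vectara_corpus_id=None, vectara_api_key=None
):
    # Explicit arguments win; otherwise fall back to the documented env vars.
    return {
        "customer_id": vectara_customer_id or os.environ.get("VECTARA_CUSTOMER_ID"),
        "corpus_id": vectara_corpus_id or os.environ.get("VECTARA_CORPUS_ID"),
        "api_key": vectara_api_key or os.environ.get("VECTARA_API_KEY"),
    }
```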
3,513 | include before/after the actual matching segment when returning results (defaults to 2).

The results are returned as a list of relevant documents, with a relevance score for each document.

For more detailed examples of using the Vectara wrapper, see one of these two sample notebooks: Chat Over Documents with Vectara; Vectara Text Generation.

PreviousVearch | NextChat Over Documents with Vectara | Vectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation |
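The documented defaults for these search arguments can be captured in a small helper for building query kwargs. This is illustrative only; the real defaults live inside the Vectara vectorstore class, and the helper name is hypothetical:

```python
# Documented defaults for similarity_search_with_score (see the list above).
VECTARA_SEARCH_DEFAULTS = {
    "k": 5,                   # number of results to return
    "lambda_val": 0.025,      # lexical matching factor for hybrid search
    "filter": None,           # metadata filter applied to results
    "n_sentence_context": 2,  # sentences kept before/after each matching segment
}

def search_kwargs(**overrides):
    # Merge caller overrides onto the documented defaults.
    merged = dict(VECTARA_SEARCH_DEFAULTS)
    merged.update(overrides)
    return merged
```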
3,514 | Comet | 🦜️🔗 Langchain | In this guide we will demonstrate how to track your Langchain Experiments, Evaluation Metrics, and LLM Sessions with Comet. |
3,515 | [repeated site navigation sidebar; see row 3,501] | In this guide we will demonstrate how to track your Langchain Experiments, Evaluation Metrics, and LLM Sessions with Comet. |
# Comet

In this guide we will demonstrate how to track your LangChain experiments, evaluation metrics, and LLM sessions with Comet.

Example Project: Comet with LangChain

## Install Comet and Dependencies

```python
import sys
!{sys.executable} -m spacy download en_core_web_sm
```

## Initialize Comet and Set your Credentials

You can grab your Comet API Key here or click the link after initializing Comet.

```python
import comet_ml

comet_ml.init(project_name="comet-example-langchain")
```

## Set OpenAI and SerpAPI credentials

You will need an OpenAI API Key and a SerpAPI API Key to run the following examples.

```python
import os

os.environ["OPENAI_API_KEY"] = "..."
# os.environ["OPENAI_ORGANIZATION"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."
```

## Scenario 1: Using just an LLM

```python
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

comet_callback = CometCallbackHandler(
    project_name="comet-example-langchain",
    complexity_metrics=True,
    stream_logs=True,
    tags=["llm"],
    visualizations=["dep"],
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks, verbose=True)

llm_result = llm.generate(["Tell me a joke", "Tell me a poem", "Tell me a fact"] * 3)
print("LLM result", llm_result)
comet_callback.flush_tracker(llm, finish=True)
```

## Scenario 2: Using an LLM in a Chain

```python
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

comet_callback = CometCallbackHandler(
    complexity_metrics=True,
    project_name="comet-example-langchain",
    stream_logs=True,
    tags=["synopsis-chain"],
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)

template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)

test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]
print(synopsis_chain.apply(test_prompts))
comet_callback.flush_tracker(synopsis_chain, finish=True)
```

## Scenario 3: Using An Agent with Tools

```python
from langchain.agents import initialize_agent, load_tools
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

comet_callback = CometCallbackHandler(
    project_name="comet-example-langchain",
    complexity_metrics=True,
    stream_logs=True,
    tags=["agent"],
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)

tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    callbacks=callbacks,
    verbose=True,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
comet_callback.flush_tracker(agent, finish=True)
```

## Scenario 4: Using Custom Evaluation Metrics

The CometCallbackManager also allows you to define and use Custom Evaluation Metrics to assess generated outputs from your model. Let's take a look at how this works. In the snippet below, we will use the ROUGE metric to evaluate the quality of a generated summary of an input prompt.

```python
%pip install rouge-score
```

```python
from rouge_score import rouge_scorer

from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate


class Rouge:
    def __init__(self, reference):
        self.reference = reference
        self.scorer = rouge_scorer.RougeScorer(["rougeLsum"], use_stemmer=True)

    def compute_metric(self, generation, prompt_idx, gen_idx):
        prediction = generation.text
        results = self.scorer.score(target=self.reference, prediction=prediction)
        return {
            "rougeLsum_score": results["rougeLsum"].fmeasure,
            "reference": self.reference,
        }


reference = """The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building.
It was the first structure to reach a height of 300 metres.
It is now taller than the Chrysler Building in New York City by 5.2 metres (17 ft).
Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France."""
rouge_score = Rouge(reference=reference)

template = """Given the following article, it is your job to write a summary.
Article:
{article}
Summary: This is the summary for the above article:"""
prompt_template = PromptTemplate(input_variables=["article"], template=template)

comet_callback = CometCallbackHandler(
    project_name="comet-example-langchain",
    complexity_metrics=False,
    stream_logs=True,
    tags=["custom_metrics"],
    custom_metrics=rouge_score.compute_metric,
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9)

synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
test_prompts = [
    {
        "article": """
            The tower is 324 metres (1,063 ft) tall, about the same height as
            an 81-storey building, and the tallest structure in Paris. Its base
            is square, measuring 125 metres (410 ft) on each side. During its
            construction, the Eiffel Tower surpassed the Washington Monument to
            become the tallest man-made structure in the world, a title it held
            for 41 years until the Chrysler Building in New York City was
            finished in 1930. It was the first structure to reach a height of
            300 metres. Due to the addition of a broadcasting aerial at the top
            of the tower in 1957, it is now taller than the Chrysler Building
            by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is
            the second tallest free-standing structure in France after the
            Millau Viaduct.
            """
    }
]
print(synopsis_chain.apply(test_prompts, callbacks=callbacks))
comet_callback.flush_tracker(synopsis_chain, finish=True)
```

Copyright © 2023 LangChain, Inc.
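The `custom_metrics` hook in Scenario 4 only needs a callable with the `(generation, prompt_idx, gen_idx)` signature that returns a dict of metric values. As a dependency-free sketch of that contract (the word-overlap metric and all names below are hypothetical illustrations, not part of the Comet docs), the same shape can be exercised without installing `rouge-score`:

```python
# Minimal stand-in for a custom evaluation metric: any callable taking
# (generation, prompt_idx, gen_idx) and returning a dict of values can be
# passed as `custom_metrics`. Here we score crude word overlap against a
# reference instead of ROUGE, so no extra packages are needed.
from types import SimpleNamespace


def word_overlap_metric(reference):
    ref_words = set(reference.lower().split())

    def compute_metric(generation, prompt_idx, gen_idx):
        pred_words = set(generation.text.lower().split())
        overlap = len(ref_words & pred_words) / max(len(ref_words), 1)
        return {"word_overlap_score": overlap, "prompt_idx": prompt_idx}

    return compute_metric


# The callback hands each LLM Generation object (with a `.text` attribute)
# to the metric; we emulate one here with a SimpleNamespace.
metric = word_overlap_metric("the tower is tall")
fake_generation = SimpleNamespace(text="The tower is very tall")
print(metric(fake_generation, 0, 0))  # word_overlap_score == 1.0
```

Swapping this callable in for `rouge_score.compute_metric` in the `CometCallbackHandler` constructor would log `word_overlap_score` alongside the other per-generation metrics.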
# Rockset

Rockset is a real-time analytics database service for serving low-latency, high-concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters.

## Installation and Setup

Make sure you have a Rockset account, and go to the web console to get the API key. Details can be found on the website.

```bash
pip install rockset
```

## Vector Store

See a usage example.

```python
from langchain.vectorstores import Rockset
```

## Document Loader

See a usage example.

```python
from langchain.document_loaders import RocksetLoader
```

## Chat Message History

See a usage example.

```python
from langchain.memory.chat_message_histories import RocksetChatMessageHistory
```
# Infino

Infino is an open-source observability platform that stores both metrics and application logs together.

Key features of Infino include:

- **Metrics Tracking:** Capture the time taken by the LLM model to handle a request, errors, the number of tokens, and a costing indication for the particular LLM.
- **Data Tracking:** Log and store prompt, request, and response data for each LangChain interaction.
- **Graph Visualization:** Generate basic graphs over time, depicting metrics such as request duration, error occurrences, token count, and cost.

## Installation and Setup

First, you'll need to install the `infinopy` Python package:

```bash
pip install infinopy
```

If you already have an Infino server running, then you're good to go; but if you don't, follow these steps to start it:

1. Make sure you have Docker installed.
2. Run the following in your terminal:

```bash
docker run --rm --detach --name infino-example -p 3000:3000 infinohq/infino:latest
```

## Using Infino

See a usage example of `InfinoCallbackHandler`.

```python
from langchain.callbacks import InfinoCallbackHandler
```
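The callback handler collects the metrics listed above (request duration, token counts, prompts, responses) for each LLM call. As a rough, dependency-free sketch of that bookkeeping (a hypothetical helper class for illustration, not Infino's actual API), the per-request quantities can be captured like this:

```python
import time


class RequestMetrics:
    """Track per-request latency and token counts, the kind of metrics an
    observability callback stores alongside the prompt/response payloads."""

    def __init__(self):
        self.records = []

    def on_request_start(self):
        self._start = time.monotonic()

    def on_request_end(self, prompt, response):
        elapsed = time.monotonic() - self._start
        self.records.append(
            {
                "duration_s": elapsed,
                # Crude whitespace tokenization; real handlers use model token counts.
                "prompt_tokens": len(prompt.split()),
                "response_tokens": len(response.split()),
                "prompt": prompt,
                "response": response,
            }
        )


metrics = RequestMetrics()
metrics.on_request_start()
metrics.on_request_end("Tell me a joke", "Why did the chicken cross the road?")
print(metrics.records[0]["prompt_tokens"])  # 4
```

Infino's value is doing this automatically at its `on_llm_start`/`on_llm_end` hooks and persisting the records as queryable time series next to the logs.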
# ClickHouse

ClickHouse is the fast and resource efficient open-source database for real-time
(Xinference)YandexYeager.aiYouTubeZepZillizComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat |
3,529 | and toolkitsMemoryCallbacksChat loadersProvidersMoreClickHouseOn this pageClickHouseClickHouse is the fast and resource-efficient open-source database for real-time | ClickHouse is the fast and resource-efficient open-source database for real-time |
3,530 | apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries.
It has data structures and distance search functions (like L2Distance) as well as
approximate nearest neighbor search indexes.
This enables ClickHouse to be used as a high-performance, scalable vector database to store and search vectors with SQL. Installation and Setup: We need to install the clickhouse-connect Python package. pip install clickhouse-connect Vector Store: See a usage example. from langchain.vectorstores import Clickhouse, ClickhouseSettings PreviousClearMLNextCnosDBInstallation and SetupVector StoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | ClickHouse is the fast and resource-efficient open-source database for real-time |
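The L2Distance function mentioned above computes plain Euclidean distance. As a minimal pure-Python sketch of what a brute-force vector query like `ORDER BY L2Distance(vec, query) LIMIT k` computes (toy data, no ClickHouse required — `rows` and `nearest` are illustrative names, not part of any API):

```python
import math

def l2_distance(a, b):
    # Euclidean distance, the same quantity ClickHouse's L2Distance(a, b) returns
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query, rows, k=2):
    # Brute-force scan: rank every row by distance to the query vector
    return sorted(rows, key=lambda r: l2_distance(r["vec"], query))[:k]

rows = [
    {"id": 1, "vec": [0.0, 0.0]},
    {"id": 2, "vec": [1.0, 1.0]},
    {"id": 3, "vec": [5.0, 5.0]},
]
print([r["id"] for r in nearest([0.9, 1.1], rows)])  # -> [2, 1]
```

The ANN indexes mentioned above exist to avoid this full scan on large tables; the distance being computed is the same.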
3,531 | MongoDB Atlas | 🦜️🔗 LangChain | MongoDB Atlas is a fully-managed cloud |
3,533 | and toolkitsMemoryCallbacksChat loadersProvidersMoreMongoDB AtlasOn this pageMongoDB AtlasMongoDB Atlas is a fully-managed cloud | MongoDB Atlas is a fully-managed cloud |
3,534 | database available in AWS, Azure, and GCP. It now has support for native
Vector Search on the MongoDB document data. Installation and Setup: See the detailed configuration instructions. We need to install the pymongo Python package. pip install pymongo Vector Store: See a usage example. from langchain.vectorstores import MongoDBAtlasVectorSearch PreviousMomentoNextMotherduckInstallation and SetupVector Store | MongoDB Atlas is a fully-managed cloud |
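Vector search of the kind described above ranks stored documents by a similarity metric such as cosine similarity. A minimal pure-Python illustration of that ranking step (toy vectors and made-up document names, no Atlas cluster or pymongo required):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical document embeddings keyed by document id
docs = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.7, 0.7, 0.0],
    "doc3": [0.0, 0.0, 1.0],
}
query = [1.0, 0.1, 0.0]
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)  # -> ['doc1', 'doc2', 'doc3']
```

In the real integration the embedding and ranking happen inside Atlas; the sketch only shows the metric being applied.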
3,535 | Facebook Faiss | 🦜️🔗 LangChain | Facebook AI Similarity Search (Faiss) |
3,537 | and toolkitsMemoryCallbacksChat loadersProvidersMoreFacebook FaissOn this pageFacebook FaissFacebook AI Similarity Search (Faiss) | Facebook AI Similarity Search (Faiss) |
3,538 | is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that
search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting
code for evaluation and parameter tuning. Faiss documentation. Installation and Setup: We need to install the faiss Python package. pip install faiss-gpu # for CUDA 7.5+ supported GPUs, OR pip install faiss-cpu # for CPU-only installation. Vector Store: See a usage example. from langchain.vectorstores import FAISS PreviousFacebook ChatNextFigmaInstallation and SetupVector Store | Facebook AI Similarity Search (Faiss) |
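Faiss scales to sets that do not fit in RAM partly by using approximate indexes that search only a subset of the data. A rough pure-Python sketch of the idea behind an IVF-style (inverted file) index — toy vectors and hand-picked centroids, not the Faiss API:

```python
def sq_dist(a, b):
    # Squared Euclidean distance between two vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_ivf(vectors, centroids):
    # Assign each vector to its nearest centroid ("inverted list" per cell)
    cells = {i: [] for i in range(len(centroids))}
    for vid, v in enumerate(vectors):
        cell = min(range(len(centroids)), key=lambda i: sq_dist(v, centroids[i]))
        cells[cell].append(vid)
    return cells

def search_ivf(query, vectors, centroids, cells, k=1):
    # Probe only the cell whose centroid is closest to the query,
    # then rank the candidates inside that cell exactly
    cell = min(range(len(centroids)), key=lambda i: sq_dist(query, centroids[i]))
    candidates = cells[cell]
    return sorted(candidates, key=lambda vid: sq_dist(query, vectors[vid]))[:k]

vectors = [[0.1, 0.0], [0.2, 0.1], [5.0, 5.1], [5.4, 4.8]]
centroids = [[0.0, 0.0], [5.0, 5.0]]  # in Faiss these would come from k-means training
cells = build_ivf(vectors, centroids)
print(search_ivf([5.1, 5.0], vectors, centroids, cells, k=1))  # -> [2]
```

The approximation comes from skipping the other cells entirely; Faiss's real indexes add trained centroids, multiple probes, and compressed vector codes on top of this idea.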
3,539 | DeepSparse | 🦜️🔗 LangChain | This page covers how to use the DeepSparse inference runtime within LangChain. |
3,541 | and toolkitsMemoryCallbacksChat loadersProvidersMoreDeepSparseOn this pageDeepSparseThis page covers how to use the DeepSparse inference runtime within LangChain. | This page covers how to use the DeepSparse inference runtime within LangChain. |
3,542 | It is broken into two parts: installation and setup, and then examples of DeepSparse usage. Installation and Setup: Install the Python package with pip install deepsparse. Choose a SparseZoo model or export a supported model to ONNX using Optimum. Wrappers: LLM: There exists a DeepSparse LLM wrapper, which you can access with: from langchain.llms import DeepSparse It provides a unified interface for all models: llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none') print(llm('def fib():')) Additional parameters can be passed using the config parameter: config = {'max_generated_tokens': 256} llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none', config=config) PreviousDeepInfraNextDiffbotInstallation and SetupWrappersLLM | This page covers how to use the DeepSparse inference runtime within LangChain. |
3,543 | Chroma | 🦜️🔗 LangChain | Chroma is a database for building AI applications with embeddings. |
3,545 | and toolkitsMemoryCallbacksChat loadersProvidersMoreChromaOn this pageChromaChroma is a database for building AI applications with embeddings. Installation and Setup: pip install chromadb VectorStore: There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore, | Chroma is a database for building AI applications with embeddings. |
3,546 | whether for semantic search or example selection. from langchain.vectorstores import Chroma For a more detailed walkthrough of the Chroma wrapper, see this notebook. Retriever: See a usage example. from langchain.retrievers import SelfQueryRetriever PreviousChaindeskNextClarifaiInstallation and SetupVectorStoreRetriever | Chroma is a database for building AI applications with embeddings. |
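The SelfQueryRetriever mentioned above works by splitting a natural-language query into a semantic part and a structured metadata filter, then applying both against the vector store. A toy pure-Python sketch of that two-step idea (hypothetical documents, a hand-written filter, and a dot-product stand-in for embedding similarity — not the LangChain API):

```python
def score(query_vec, doc_vec):
    # Dot-product similarity stands in for a real embedding comparison
    return sum(q * d for q, d in zip(query_vec, doc_vec))

def self_query(query_vec, metadata_filter, docs, k=2):
    # 1) keep only docs whose metadata passes the structured filter
    candidates = [d for d in docs if metadata_filter(d["metadata"])]
    # 2) rank the survivors by vector similarity to the semantic query
    return sorted(candidates, key=lambda d: score(query_vec, d["vec"]), reverse=True)[:k]

docs = [
    {"id": "a", "vec": [0.9, 0.1], "metadata": {"year": 1993, "rating": 7.7}},
    {"id": "b", "vec": [0.8, 0.2], "metadata": {"year": 2010, "rating": 8.2}},
    {"id": "c", "vec": [0.1, 0.9], "metadata": {"year": 2019, "rating": 8.3}},
]
# "a highly rated movie about X" -> semantic vector plus the filter rating > 8
hits = self_query([1.0, 0.0], lambda m: m["rating"] > 8, docs, k=1)
print([d["id"] for d in hits])  # -> ['b']
```

In the real retriever an LLM produces the filter from the query text and the declared metadata fields; the sketch only shows how filter and similarity combine.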
3,547 | Chroma | 🦜️🔗 LangChain | Chroma is a database for building AI applications with embeddings. |
3,548 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversAmazon KendraArcee RetrieverArxivAzure Cognitive SearchBM25ChaindeskChatGPT PluginCohere RerankerDocArrayElasticSearch BM25Google Cloud Enterprise SearchGoogle DriveGoogle Vertex AI SearchKay.aikNNLOTR (Merger Retriever)MetalPinecone Hybrid SearchPubMedRePhraseQuerySEC filingSelf-querying retrieverDeep LakeChromaDashVectorElasticsearchMilvusMyScaleOpenSearchPineconeQdrantRedisSupabaseTimescale Vector (Postgres) self-queryingVectaraWeaviateSVMTavily Search APITF-IDFVespaWeaviate Hybrid SearchWikipediayou-retrieverZepToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsRetrieversSelf-querying retrieverChromaOn this pageChromaChroma is a database for building AI applications with embeddings. In this notebook, we'll demo the SelfQueryRetriever wrapped around a Chroma vector store. Creating a Chroma vector store: First we'll want to create a Chroma vector store and seed it with some data. We've created a small demo set of documents that contain summaries of movies. Note: the self-query retriever requires you to have lark installed (pip install lark).
We also need the chromadb package. #!pip install lark #!pip install chromadb We want to use OpenAIEmbeddings, so we have to get the OpenAI API key. import os import getpass os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key: ········ from langchain.schema import Document from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma embeddings = OpenAIEmbeddings() docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( | Chroma is a database for building AI applications with embeddings. |
3,549 | "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", }, ),]vectorstore = Chroma.from_documents(docs, embeddings) Using embedded DuckDB without persistence: data will be transientCreating our self-querying retriever Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.from langchain.llms import OpenAIfrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain.chains.query_constructor.base import AttributeInfometadata_field_info = [ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", | Chroma is a database for building AI applications with embeddings.
3,550 | of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ),]document_content_description = "Brief summary of a movie"llm = OpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, verbose=True)Testing it out And now we can try actually using our retriever!# This example only specifies a relevant queryretriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]# This example only specifies a filterretriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) [Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and a filterretriever.get_relevant_documents("Has Greta Gerwig directed any movies about | Chroma is a database for building AI applications with embeddings.
3,551 | Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]# This example specifies a composite filterretriever.get_relevant_documents( "What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]) [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]# This example specifies a query and composite filterretriever.get_relevant_documents( "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]Filter k We can also use the self-query retriever to specify k: the number of documents to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)# This example only specifies a relevant queryretriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None | Chroma is a database for building AI applications with embeddings.
3,552 | dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]PreviousDeep LakeNextDashVectorCreating a Chroma vector storeCreating our self-querying retrieverTesting it outFilter kCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Chroma is a database for building AI applications with embeddings.
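The query='…' filter=Comparison(…) lines in the output above are the structured query the LLM produces before it is applied to document metadata. As a plain-Python sketch of how such a filter tree can be evaluated (illustrative only: the matches helper, OPS table, and dict-based filter shape are hypothetical, not LangChain's internal Comparison/Operation representation):

```python
# Illustrative evaluation of a self-query metadata filter (hypothetical
# helper names; LangChain's real Comparison/Operation classes differ).
import operator

OPS = {"eq": operator.eq, "gt": operator.gt, "lt": operator.lt}

def matches(metadata, flt):
    """Return True if a document's metadata satisfies the filter tree."""
    if flt is None:  # no filter -> every document matches
        return True
    if flt["type"] == "comparison":
        value = metadata.get(flt["attribute"])
        return value is not None and OPS[flt["comparator"]](value, flt["value"])
    # an "and" operation: all sub-filters must hold
    return all(matches(metadata, arg) for arg in flt["arguments"])

# The composite filter from the "highly rated science fiction" example:
flt = {"type": "operation", "arguments": [
    {"type": "comparison", "comparator": "eq", "attribute": "genre", "value": "science fiction"},
    {"type": "comparison", "comparator": "gt", "attribute": "rating", "value": 8.5},
]}
stalker = {"year": 1979, "rating": 9.9, "genre": "science fiction"}
jurassic = {"year": 1993, "rating": 7.7, "genre": "science fiction"}
print(matches(stalker, flt), matches(jurassic, flt))  # True False
```

Only documents passing the filter are then ranked by vector similarity against the query string.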
3,553 | DocArray | 🦜️🔗 Langchain | DocArray is a library for nested, unstructured, multimodal data in transit,
3,555 | DocArrayDocArray is a library for nested, unstructured, multimodal data in transit, | DocArray is a library for nested, unstructured, multimodal data in transit,
3,556 | including text, image, audio, video, 3D mesh, etc. It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer multimodal data with a Pythonic API.Installation and Setup We need to install the docarray Python package.pip install docarrayVector Store LangChain provides access to the in-memory and HNSW vector stores from the DocArray library.See a usage example.from langchain.vectorstores import DocArrayHnswSearchSee a usage example.from langchain.vectorstores import DocArrayInMemorySearch | DocArray is a library for nested, unstructured, multimodal data in transit,
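An in-memory vector store like DocArrayInMemorySearch keeps embeddings in an in-process list and ranks documents by vector similarity at query time. A toy sketch of that idea (the embed function and InMemorySearch class are illustrative stand-ins, not the DocArray or LangChain API):

```python
# Toy in-memory vector search: the concept behind DocArrayInMemorySearch.
# embed() is a stand-in for a real embedding model.
import math

def embed(text):
    # Hypothetical embedding: character-frequency vector over a-z.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class InMemorySearch:
    def __init__(self, texts):
        # Embed every document once at index time and keep it in memory.
        self.store = [(t, embed(t)) for t in texts]

    def similarity_search(self, query, k=1):
        qv = embed(query)
        ranked = sorted(self.store, key=lambda te: cosine(qv, te[1]), reverse=True)
        return [t for t, _ in ranked[:k]]

index = InMemorySearch(["dogs are loyal pets", "the stock market fell today"])
print(index.similarity_search("a loyal dog", k=1))
```

The HNSW variant (DocArrayHnswSearch) replaces the exhaustive sorted scan with an approximate-nearest-neighbor graph persisted on disk, trading exactness for speed on large collections.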
3,557 | RWKV-4 | 🦜️🔗 Langchain | This page covers how to use the RWKV-4 wrapper within LangChain.
3,559 | RWKV-4This page covers how to use the RWKV-4 wrapper within LangChain. | This page covers how to use the RWKV-4 wrapper within LangChain.
3,560 | It is broken into two parts: installation and setup, and then usage with an example.Installation and Setup Install the Python package with pip install rwkvInstall the tokenizer Python package with pip install tokenizerDownload a RWKV model and place it in your desired directoryDownload the tokens fileUsage RWKV To use the RWKV wrapper, you need to provide the path to the pre-trained model file and the tokenizer's configuration.from langchain.llms import RWKV# Test the modeldef generate_prompt(instruction, input=None): if input: return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.# Instruction:{instruction}# Input:{input}# Response:""" else: return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.# Instruction:{instruction}# Response:"""model = RWKV(model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth", strategy="cpu fp32", tokens_path="./rwkv/20B_tokenizer.json")response = model(generate_prompt("Once upon a time, "))Model File You can find links to model file downloads at the RWKV-4-Raven repository.RWKV-4 models -> recommended VRAM
Model | 8bit | bf16/fp16 | fp32
14B | 16GB | 28GB | >50GB
7B | 8GB | 14GB | 28GB
3B | 2.8GB | 6GB | 12GB
1b5 | 1.3GB | 3GB | 6GB
See the rwkv pip page for more information about strategies, including streaming and CUDA support. | This page covers how to use the RWKV-4 wrapper within LangChain.
3,561 | Petals | 🦜️🔗 Langchain | This page covers how to use the Petals ecosystem within LangChain.
3,563 | Petals: This page covers how to use the Petals ecosystem within LangChain. | This page covers how to use the Petals ecosystem within LangChain. | This page covers how to use the Petals ecosystem within LangChain. ->: Petals: This page covers how to use the Petals ecosystem within LangChain. |
3,564 | It is broken into two parts: installation and setup, and then references to specific Petals wrappers. Installation and Setup: Install with pip install petals. Get a Hugging Face API key and set it as an environment variable (HUGGINGFACE_API_KEY). Wrappers: LLM: There exists a Petals LLM wrapper, which you can access with from langchain.llms import Petals. PreviousOpenWeatherMapNextPostgres EmbeddingInstallation and SetupWrappersLLMCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This page covers how to use the Petals ecosystem within LangChain. | This page covers how to use the Petals ecosystem within LangChain. ->: It is broken into two parts: installation and setup, and then references to specific Petals wrappers. Installation and Setup: Install with pip install petals. Get a Hugging Face API key and set it as an environment variable (HUGGINGFACE_API_KEY). Wrappers: LLM: There exists a Petals LLM wrapper, which you can access with from langchain.llms import Petals. PreviousOpenWeatherMapNextPostgres EmbeddingInstallation and SetupWrappersLLMCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
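The setup step above is just an environment variable; a minimal sketch of the pattern is below. The key value is a placeholder, and the commented-out `model_name` (`bigscience/bloom-petals`) is illustrative, not confirmed by this page:

```python
import os

# The Petals wrapper reads the Hugging Face key from this environment
# variable; "hf_example_key" is a placeholder, not a real key.
os.environ["HUGGINGFACE_API_KEY"] = "hf_example_key"

# With the key set (and `pip install petals` done), the wrapper could be
# constructed like this -- commented out because it needs the package and
# a real key; the model name is an illustrative assumption:
# from langchain.llms import Petals
# llm = Petals(model_name="bigscience/bloom-petals")
key_is_set = "HUGGINGFACE_API_KEY" in os.environ
```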
3,565 | Metal | 🦜️🔗 Langchain | This page covers how to use Metal within LangChain. | This page covers how to use Metal within LangChain. ->: Metal | 🦜️🔗 Langchain |
3,567 | Metal: This page covers how to use Metal within LangChain. What is Metal? Metal is a managed retrieval & memory platform built for production. Easily index your data into Metal and run semantic search and retrieval on it. Quick start: Get started by creating a Metal account. Then, you can easily take advantage of the MetalRetriever class to start retrieving your data for semantic search, prompting context, etc. This class takes a Metal instance and a dictionary of parameters to pass to the Metal API. from langchain.retrievers import MetalRetriever; from metal_sdk.metal import Metal; metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID"); retriever = MetalRetriever(metal, params={"limit": 2}); docs = retriever.get_relevant_documents("search term") | This page covers how to use Metal within LangChain. | This page covers how to use Metal within LangChain. ->: Metal: This page covers how to use Metal within LangChain. What is Metal? Metal is a managed retrieval & memory platform built for production. Easily index your data into Metal and run semantic search and retrieval on it. Quick start: Get started by creating a Metal account. Then, you can easily take advantage of the MetalRetriever class to start retrieving your data for semantic search, prompting context, etc. This class takes a Metal instance and a dictionary of parameters to pass to the Metal API. from langchain.retrievers import MetalRetriever; from metal_sdk.metal import Metal; metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID"); retriever = MetalRetriever(metal, params={"limit": 2}); docs = retriever.get_relevant_documents("search term") |
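The client-plus-params pattern MetalRetriever uses can be illustrated with a self-contained sketch. `FakeMetalClient` and `SimpleRetriever` below are stubs invented for illustration; the real client lives in `metal_sdk` and runs semantic search server-side with network access and credentials:

```python
class FakeMetalClient:
    """Stub standing in for the Metal SDK client (hypothetical)."""

    def search(self, query):
        # A real client would run semantic search against the Metal index;
        # here we just do substring matching over a toy corpus.
        docs = ["doc about cats", "doc about dogs", "doc about birds"]
        return [d for d in docs if query in d] or docs


class SimpleRetriever:
    """Toy retriever: wraps a client and a params dict like {"limit": 2}."""

    def __init__(self, client, params=None):
        self.client = client
        self.params = params or {}

    def get_relevant_documents(self, query):
        results = self.client.search(query)
        limit = self.params.get("limit")
        return results[:limit] if limit else results


retriever = SimpleRetriever(FakeMetalClient(), params={"limit": 2})
docs = retriever.get_relevant_documents("cats")
```

The params dict caps how many documents come back, mirroring how `MetalRetriever(metal, params={"limit": 2})` forwards parameters to the Metal API.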
3,568 | scikit-learn | 🦜️🔗 Langchain | scikit-learn is an open-source collection of machine learning algorithms, | scikit-learn is an open-source collection of machine learning algorithms, ->: scikit-learn | 🦜️🔗 Langchain |
3,570 | scikit-learn: scikit-learn is an open-source collection of machine learning algorithms, | scikit-learn is an open-source collection of machine learning algorithms, | scikit-learn is an open-source collection of machine learning algorithms, ->: scikit-learn: scikit-learn is an open-source collection of machine learning algorithms, |
including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the ability to persist the vector store in JSON, BSON (binary JSON), or Apache Parquet format. Installation and Setup: Install the Python package with pip install scikit-learn. Vector Store: SKLearnVectorStore provides a simple wrapper around the nearest neighbor implementation in the scikit-learn package, allowing you to use it as a vector store. To import this vector store: from langchain.vectorstores import SKLearnVectorStore. For a more detailed walkthrough of the SKLearnVectorStore wrapper, see this notebook. | scikit-learn is an open-source collection of machine learning algorithms, | scikit-learn is an open-source collection of machine learning algorithms, ->: including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the ability to persist the vector store in JSON, BSON (binary JSON), or Apache Parquet format. Installation and Setup: Install the Python package with pip install scikit-learn. Vector Store: SKLearnVectorStore provides a simple wrapper around the nearest neighbor implementation in the scikit-learn package, allowing you to use it as a vector store. To import this vector store: from langchain.vectorstores import SKLearnVectorStore. For a more detailed walkthrough of the SKLearnVectorStore wrapper, see this notebook. |
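The combination the wrapper offers, nearest-neighbor search plus JSON persistence, can be sketched without LangChain or scikit-learn. `TinyVectorStore` below is a toy invented for illustration: brute-force cosine similarity instead of scikit-learn's optimized neighbors, and plain JSON as the persistence format:

```python
import json
import math
import os
import tempfile

def cosine(a, b):
    # Cosine similarity between two non-zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class TinyVectorStore:
    def __init__(self):
        self.items = []  # each item: {"text": ..., "vector": ...}

    def add(self, text, vector):
        self.items.append({"text": text, "vector": vector})

    def search(self, query_vector, k=1):
        # Brute-force ranking by cosine similarity, highest first.
        ranked = sorted(self.items,
                        key=lambda it: cosine(it["vector"], query_vector),
                        reverse=True)
        return [it["text"] for it in ranked[:k]]

    def persist(self, path):
        # JSON persistence, loosely mirroring SKLearnVectorStore's
        # json serializer option.
        with open(path, "w") as f:
            json.dump(self.items, f)

    @classmethod
    def load(cls, path):
        store = cls()
        with open(path) as f:
            store.items = json.load(f)
        return store

store = TinyVectorStore()
store.add("cats", [1.0, 0.0])
store.add("dogs", [0.0, 1.0])
path = os.path.join(tempfile.mkdtemp(), "store.json")
store.persist(path)
reloaded = TinyVectorStore.load(path)
best = reloaded.search([0.9, 0.1], k=1)
```

The round trip through `persist`/`load` is the point: the store survives as a plain file, which is what the wrapper adds on top of scikit-learn's in-memory neighbors.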
3,572 | Google | 🦜️🔗 Langchain | All functionality related to Google Cloud Platform | All functionality related to Google Cloud Platform ->: Google | 🦜️🔗 Langchain |
3,573 | Google: All functionality related to Google Cloud Platform. LLMs: Vertex AI: Access PaLM LLMs like text-bison and code-bison via Google Cloud: from langchain.llms import VertexAI. Model Garden: Access PaLM and hundreds of OSS models via Vertex AI Model Garden: from langchain.llms import VertexAIModelGarden. Chat models: Vertex AI: Access PaLM chat models like chat-bison and codechat-bison via Google Cloud: from langchain.chat_models import ChatVertexAI. Document Loader: Google BigQuery: Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. | All functionality related to Google Cloud Platform | All functionality related to Google Cloud Platform ->: Google: All functionality related to Google Cloud Platform. LLMs: Vertex AI: Access PaLM LLMs like text-bison and code-bison via Google Cloud: from langchain.llms import VertexAI. Model Garden: Access PaLM and hundreds of OSS models via Vertex AI Model Garden: from langchain.llms import VertexAIModelGarden. Chat models: Vertex AI: Access PaLM chat models like chat-bison and codechat-bison via Google Cloud: from langchain.chat_models import ChatVertexAI. Document Loader: Google BigQuery: Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. |
3,574 | BigQuery is a part of the Google Cloud Platform.First, we need to install google-cloud-bigquery python package.pip install google-cloud-bigquerySee a usage example.from langchain.document_loaders import BigQueryLoaderGoogle Cloud Storage‚ÄãGoogle Cloud Storage is a managed service for storing unstructured data.First, we need to install google-cloud-storage python package.pip install google-cloud-storageThere are two loaders for the Google Cloud Storage: the Directory and the File loaders.See a usage example.from langchain.document_loaders import GCSDirectoryLoaderSee a usage example.from langchain.document_loaders import GCSFileLoaderGoogle Drive‚ÄãGoogle Drive is a file storage and synchronization service developed by Google.Currently, only Google Docs are supported.First, we need to install several python package.pip install google-api-python-client google-auth-httplib2 google-auth-oauthlibSee a usage example and authorizing instructions.from langchain.document_loaders import GoogleDriveLoaderVector Store‚ÄãGoogle Vertex AI MatchingEngine‚ÄãGoogle Vertex AI Matching Engine provides
the industry's leading high-scale low latency vector database. These vector databases are commonly
referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.We need to install several python packages.pip install tensorflow google-cloud-aiplatform tensorflow-hub tensorflow-textSee a usage example.from langchain.vectorstores import MatchingEngineGoogle ScaNN‚ÄãGoogle ScaNN
(Scalable Nearest Neighbors) is a python package.ScaNN is a method for efficient vector similarity search at scale.ScaNN includes search space pruning and quantization for Maximum Inner
Product Search and also supports other distance functions such as
Euclidean distance. The implementation is optimized for x86 processors
with AVX2 support. See its Google Research github | All functionality related to Google Cloud Platform | All functionality related to Google Cloud Platform ->: BigQuery is a part of the Google Cloud Platform.First, we need to install google-cloud-bigquery python package.pip install google-cloud-bigquerySee a usage example.from langchain.document_loaders import BigQueryLoaderGoogle Cloud Storage‚ÄãGoogle Cloud Storage is a managed service for storing unstructured data.First, we need to install google-cloud-storage python package.pip install google-cloud-storageThere are two loaders for the Google Cloud Storage: the Directory and the File loaders.See a usage example.from langchain.document_loaders import GCSDirectoryLoaderSee a usage example.from langchain.document_loaders import GCSFileLoaderGoogle Drive‚ÄãGoogle Drive is a file storage and synchronization service developed by Google.Currently, only Google Docs are supported.First, we need to install several python package.pip install google-api-python-client google-auth-httplib2 google-auth-oauthlibSee a usage example and authorizing instructions.from langchain.document_loaders import GoogleDriveLoaderVector Store‚ÄãGoogle Vertex AI MatchingEngine‚ÄãGoogle Vertex AI Matching Engine provides
the industry's leading high-scale low latency vector database. These vector databases are commonly
referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.We need to install several python packages.pip install tensorflow google-cloud-aiplatform tensorflow-hub tensorflow-textSee a usage example.from langchain.vectorstores import MatchingEngineGoogle ScaNN‚ÄãGoogle ScaNN
(Scalable Nearest Neighbors) is a python package.ScaNN is a method for efficient vector similarity search at scale.ScaNN includes search space pruning and quantization for Maximum Inner
Product Search and also supports other distance functions such as
Euclidean distance. The implementation is optimized for x86 processors
with AVX2 support. See its Google Research github |
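Maximum Inner Product Search, the problem ScaNN accelerates, reduces to ranking every candidate vector by its dot product with the query. A brute-force reference version makes that baseline concrete (ScaNN itself adds search space pruning and quantization precisely so that most candidates are never fully scored):

```python
def mips(query, database, k=2):
    """Brute-force maximum inner product search: score every vector.

    This is the naive O(n*d) baseline; ScaNN reaches the same answers
    faster via pruning and quantization.
    """
    scores = [
        (sum(q * d for q, d in zip(query, vec)), idx)
        for idx, vec in enumerate(database)
    ]
    scores.sort(reverse=True)  # highest inner product first
    return [idx for _, idx in scores[:k]]

database = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
top = mips([1.0, 0.2], database, k=2)
```

Swapping the dot product for negative squared Euclidean distance turns the same loop into the Euclidean-distance search the page mentions ScaNN also supports.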
3,575 | with AVX2 support. See its Google Research github
for more details.We need to install scann python package.pip install scannSee a usage example.from langchain.vectorstores import ScaNNRetrievers‚ÄãVertex AI Search‚ÄãGoogle Cloud Vertex AI Search
allows developers to quickly build generative AI powered search engines for customers and employees.First, you need to install the google-cloud-discoveryengine Python package.pip install google-cloud-discoveryengineSee a usage example.from langchain.retrievers import GoogleVertexAISearchRetrieverDocument AI Warehouse‚ÄãGoogle Cloud Document AI Warehouse
allows enterprises to search, store, govern, and manage documents and their AI-extracted
data and metadata in a single platform. Documents should be uploaded outside of Langchain,from langchain.retrievers import GoogleDocumentAIWarehouseRetrieverdocai_wh_retriever = GoogleDocumentAIWarehouseRetriever( project_number=...)query = ...documents = docai_wh_retriever.get_relevant_documents( query, user_ldap=...)Tools‚ÄãGoogle Search‚ÄãInstall requirements with pip install google-api-python-clientSet up a Custom Search Engine, following these instructionsGet an API Key and Custom Search Engine ID from the previous step, and set them as environment variables GOOGLE_API_KEY and GOOGLE_CSE_ID respectivelyThere exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:from langchain.utilities import GoogleSearchAPIWrapperFor a more detailed walkthrough of this wrapper, see this notebook.We can easily load this wrapper as a Tool (to use with an Agent). We can do this with:from langchain.agents import load_toolstools = load_tools(["google-search"])Document Transformer‚ÄãGoogle Document AI‚ÄãDocument AI is a Google Cloud Platform
service to transform unstructured data from documents into structured data, making it easier
to understand, analyze, and consume. We need to set up a GCS bucket and create your own OCR processor | All functionality related to Google Cloud Platform | All functionality related to Google Cloud Platform ->: with AVX2 support. See its Google Research github
for more details.We need to install scann python package.pip install scannSee a usage example.from langchain.vectorstores import ScaNNRetrievers‚ÄãVertex AI Search‚ÄãGoogle Cloud Vertex AI Search
allows developers to quickly build generative AI powered search engines for customers and employees.First, you need to install the google-cloud-discoveryengine Python package.pip install google-cloud-discoveryengineSee a usage example.from langchain.retrievers import GoogleVertexAISearchRetrieverDocument AI Warehouse‚ÄãGoogle Cloud Document AI Warehouse
allows enterprises to search, store, govern, and manage documents and their AI-extracted
data and metadata in a single platform. Documents should be uploaded outside of Langchain,from langchain.retrievers import GoogleDocumentAIWarehouseRetrieverdocai_wh_retriever = GoogleDocumentAIWarehouseRetriever( project_number=...)query = ...documents = docai_wh_retriever.get_relevant_documents( query, user_ldap=...)Tools‚ÄãGoogle Search‚ÄãInstall requirements with pip install google-api-python-clientSet up a Custom Search Engine, following these instructionsGet an API Key and Custom Search Engine ID from the previous step, and set them as environment variables GOOGLE_API_KEY and GOOGLE_CSE_ID respectivelyThere exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:from langchain.utilities import GoogleSearchAPIWrapperFor a more detailed walkthrough of this wrapper, see this notebook.We can easily load this wrapper as a Tool (to use with an Agent). We can do this with:from langchain.agents import load_toolstools = load_tools(["google-search"])Document Transformer‚ÄãGoogle Document AI‚ÄãDocument AI is a Google Cloud Platform
service to transform unstructured data from documents into structured data, making it easier
to understand, analyze, and consume. We need to set up a GCS bucket and create your own OCR processor |
3,576 | The GCS_OUTPUT_PATH should be a path to a folder on GCS (starting with gs://)
and a processor name should look like projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID.
We can get it either programmatically or copy from the Prediction endpoint section of the Processor details
tab in the Google Cloud Console.pip install google-cloud-documentaipip install google-cloud-documentai-toolboxSee a usage example.from langchain.document_loaders.blob_loaders import Blobfrom langchain.document_loaders.parsers import DocAIParserPreviousAWSNextMicrosoftLLMsVertex AIModel GardenChat modelsVertex AIDocument LoaderGoogle BigQueryGoogle Cloud StorageGoogle DriveVector StoreGoogle Vertex AI MatchingEngineGoogle ScaNNRetrieversVertex AI SearchDocument AI WarehouseToolsGoogle SearchDocument TransformerGoogle Document AICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | All functionality related to Google Cloud Platform | All functionality related to Google Cloud Platform ->: The GCS_OUTPUT_PATH should be a path to a folder on GCS (starting with gs://)
and a processor name should look like projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID.
We can get it either programmatically or copy from the Prediction endpoint section of the Processor details
tab in the Google Cloud Console.pip install google-cloud-documentaipip install google-cloud-documentai-toolboxSee a usage example.from langchain.document_loaders.blob_loaders import Blobfrom langchain.document_loaders.parsers import DocAIParserPreviousAWSNextMicrosoftLLMsVertex AIModel GardenChat modelsVertex AIDocument LoaderGoogle BigQueryGoogle Cloud StorageGoogle DriveVector StoreGoogle Vertex AI MatchingEngineGoogle ScaNNRetrieversVertex AI SearchDocument AI WarehouseToolsGoogle SearchDocument TransformerGoogle Document AICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
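The processor name format described above is a fixed resource path; a small helper makes its shape explicit. The project number, location, and processor ID below are placeholders for illustration only:

```python
def processor_name(project_number, location, processor_id):
    # Builds the fully qualified resource name Document AI expects:
    # projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID
    return (
        f"projects/{project_number}"
        f"/locations/{location}"
        f"/processors/{processor_id}"
    )

# Placeholder values, not a real processor:
name = processor_name("123456789", "us", "abcdef0123456789")
```

The same string is what you would copy from the Prediction endpoint section of the Processor details tab in the Cloud Console.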
3,577 | YouTube | 🦜️🔗 Langchain | YouTube is an online video sharing and social media platform by Google. | YouTube is an online video sharing and social media platform by Google. ->: YouTube | 🦜️🔗 Langchain |
3,579 | YouTube: YouTube is an online video sharing and social media platform by Google. | YouTube is an online video sharing and social media platform by Google. | YouTube is an online video sharing and social media platform by Google. ->: YouTube: YouTube is an online video sharing and social media platform by Google. |
3,580 | We download the YouTube transcripts and video information. Installation and Setup: pip install youtube-transcript-api and pip install pytube. See a usage example. Document Loader: See a usage example. from langchain.document_loaders import YoutubeLoader; from langchain.document_loaders import GoogleApiYoutubeLoader | YouTube is an online video sharing and social media platform by Google. | YouTube is an online video sharing and social media platform by Google. ->: We download the YouTube transcripts and video information. Installation and Setup: pip install youtube-transcript-api and pip install pytube. See a usage example. Document Loader: See a usage example. from langchain.document_loaders import YoutubeLoader; from langchain.document_loaders import GoogleApiYoutubeLoader |
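Loaders like these start from a video URL, and the first step is pulling out the 11-character video ID. The parser below is a simplified stand-alone sketch of that step, not LangChain's actual implementation, and handles only the two most common URL shapes:

```python
from urllib.parse import urlparse, parse_qs

def video_id(url):
    """Extract the video ID from common YouTube URL forms.

    Simplified sketch: handles youtu.be short links and
    youtube.com/watch?v=... links; a real loader covers more variants.
    """
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":
        return parsed.path.lstrip("/")
    if parsed.hostname and parsed.hostname.endswith("youtube.com"):
        return parse_qs(parsed.query).get("v", [None])[0]
    return None

vid_short = video_id("https://youtu.be/dQw4w9WgXcQ")
vid_long = video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
```

Once the ID is known, youtube-transcript-api fetches the transcript and pytube the video metadata, which is what the loaders combine.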
3,581 | Databricks | 🦜️🔗 Langchain | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: Databricks | 🦜️🔗 Langchain |
3,582 | Skip to main content | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: Skip to main content |
3,583 | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.Databricks embraces the LangChain ecosystem in various ways:Databricks connector for the SQLDatabase Chain: SQLDatabase.from_databricks() provides an easy way to query your data on Databricks through LangChainDatabricks MLflow integrates with LangChain: Tracking and serving LangChain applications with fewer stepsDatabricks MLflow AI GatewayDatabricks as an LLM provider: Deploy your fine-tuned LLMs on Databricks via serving endpoints or cluster driver proxy apps, and query it as langchain.llms.DatabricksDatabricks Dolly: Databricks open-sourced Dolly which allows for commercial use, and can be accessed through the Hugging Face HubDatabricks connector for the SQLDatabase Chain​You can connect to Databricks runtimes and Databricks SQL using the SQLDatabase wrapper of LangChain. | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
->: The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.Databricks embraces the LangChain ecosystem in various ways:Databricks connector for the SQLDatabase Chain: SQLDatabase.from_databricks() provides an easy way to query your data on Databricks through LangChainDatabricks MLflow integrates with LangChain: Tracking and serving LangChain applications with fewer stepsDatabricks MLflow AI GatewayDatabricks as an LLM provider: Deploy your fine-tuned LLMs on Databricks via serving endpoints or cluster driver proxy apps, and query it as langchain.llms.DatabricksDatabricks Dolly: Databricks open-sourced Dolly which allows for commercial use, and can be accessed through the Hugging Face HubDatabricks connector for the SQLDatabase Chain​You can connect to Databricks runtimes and Databricks SQL using the SQLDatabase wrapper of LangChain. |
3,584 | See the notebook Connect to Databricks for details.Databricks MLflow integrates with LangChain​MLflow is an open-source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook MLflow Callback Handler for details about MLflow's integration with LangChain.Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See MLflow guide for more details.Databricks MLflow makes it more convenient to develop LangChain applications on Databricks. For MLflow tracking, you don't need to set the tracking uri. For MLflow Model Serving, you can save LangChain Chains in the MLflow langchain flavor, and then register and serve the Chain with a few clicks on Databricks, with credentials securely managed by MLflow Model Serving.Databricks MLflow AI Gateway​See MLflow AI Gateway.Databricks as an LLM provider​The notebook Wrap Databricks endpoints as LLMs illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development. Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.Databricks Dolly​Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. 
The model is | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: See the notebook Connect to Databricks for details.Databricks MLflow integrates with LangChain​MLflow is an open-source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook MLflow Callback Handler for details about MLflow's integration with LangChain.Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See MLflow guide for more details.Databricks MLflow makes it more convenient to develop LangChain applications on Databricks. For MLflow tracking, you don't need to set the tracking uri. For MLflow Model Serving, you can save LangChain Chains in the MLflow langchain flavor, and then register and serve the Chain with a few clicks on Databricks, with credentials securely managed by MLflow Model Serving.Databricks MLflow AI Gateway​See MLflow AI Gateway.Databricks as an LLM provider​The notebook Wrap Databricks endpoints as LLMs illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development. Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. 
Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.Databricks Dolly​Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is |
3,585 | that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook Hugging Face Hub for instructions to access it through the Hugging Face Hub integration with LangChain.PreviousDashVectorNextDatadog TracingDatabricks connector for the SQLDatabase ChainDatabricks MLflow integrates with LangChainDatabricks MLflow AI GatewayDatabricks as an LLM providerDatabricks DollyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. | The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook Hugging Face Hub for instructions to access it through the Hugging Face Hub integration with LangChain.PreviousDashVectorNextDatadog TracingDatabricks connector for the SQLDatabase ChainDatabricks MLflow integrates with LangChainDatabricks MLflow AI GatewayDatabricks as an LLM providerDatabricks DollyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
3,586 | Cookbook | 🦜️🔗 Langchain
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKCookbookThe page you're looking for has been moved to the cookbook section of the repo as a notebook.CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | The page you're looking for has been moved to the cookbook section of the repo as a notebook. | The page you're looking for has been moved to the cookbook section of the repo as a notebook. ->: Cookbook | 🦜️🔗 Langchain
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKCookbookThe page you're looking for has been moved to the cookbook section of the repo as a notebook.CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
3,587 | Yeager.ai | 🦜️🔗 Langchain | This page covers how to use Yeager.ai to generate LangChain tools and agents. | This page covers how to use Yeager.ai to generate LangChain tools and agents. ->: Yeager.ai | 🦜️🔗 Langchain |
3,588 | Skip to main content | This page covers how to use Yeager.ai to generate LangChain tools and agents. | This page covers how to use Yeager.ai to generate LangChain tools and agents. ->: Skip to main content |
3,589 | This page covers how to use Yeager.ai to generate LangChain tools and agents.What is Yeager.ai?​Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools. It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.yAgents​A low-code generative agent designed to help you build, prototype, and deploy Langchain tools with ease. How to use?​pip install yeagerai-agentyeagerai-agentGo to http://127.0.0.1:7860This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab "Settings".OPENAI_API_KEY=<your_openai_api_key_here>We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.Creating and Executing Tools with yAgents​yAgents makes it easy to create and execute AI-powered tools. Here's a brief overview of the process:Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool's purpose and functionality. For example: | This page covers how to use Yeager.ai to generate LangChain tools and agents. | This page covers how to use Yeager.ai to generate LangChain tools and agents. ->: This page covers how to use Yeager.ai to generate LangChain tools and agents.What is Yeager.ai?​Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools.
It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.yAgents​A low-code generative agent designed to help you build, prototype, and deploy Langchain tools with ease. How to use?​pip install yeagerai-agentyeagerai-agentGo to http://127.0.0.1:7860This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab "Settings".OPENAI_API_KEY=<your_openai_api_key_here>We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.Creating and Executing Tools with yAgents​yAgents makes it easy to create and execute AI-powered tools. Here's a brief overview of the process:Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool's purpose and functionality. For example: |
3,590 | create a tool that returns the n-th prime numberLoad the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example:
load the tool that you just created into your toolkitExecute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example:
generate the 50th prime numberYou can see a video of how it works here.As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.For more information, see yAgents' Github or our docsPreviousYandexNextYouTubeWhat is Yeager.ai?yAgentsHow to use?Creating and Executing Tools with yAgentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This page covers how to use Yeager.ai to generate LangChain tools and agents. | This page covers how to use Yeager.ai to generate LangChain tools and agents. ->: create a tool that returns the n-th prime numberLoad the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example:
load the tool that you just created into your toolkitExecute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example:
generate the 50th prime numberYou can see a video of how it works here.As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.For more information, see yAgents' Github or our docsPreviousYandexNextYouTubeWhat is Yeager.ai?yAgentsHow to use?Creating and Executing Tools with yAgentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
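For the first example prompt ("create a tool that returns the n-th prime number"), the core logic of the generated tool might boil down to something like this. This is an illustrative sketch of what such a tool could compute, not actual yAgents output:

```python
def nth_prime(n: int) -> int:
    """Return the n-th prime (1-indexed), by simple trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # candidate is prime if no divisor up to its square root divides it
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

print(nth_prime(50))  # 229, matching the "generate the 50th prime number" prompt
```

A real yAgents tool would wrap logic like this in a LangChain tool interface so an agent can invoke it from natural language.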
3,591 | Redis | 🦜️🔗 Langchain | Redis (Remote Dictionary Server) is an open-source in-memory storage, | Redis (Remote Dictionary Server) is an open-source in-memory storage, ->: Redis | 🦜️🔗 Langchain |
3,592 | Skip to main content | Redis (Remote Dictionary Server) is an open-source in-memory storage, | Redis (Remote Dictionary Server) is an open-source in-memory storage, ->: Skip to main content |
3,593 | Redis (Remote Dictionary Server) is an open-source in-memory storage, | Redis (Remote Dictionary Server) is an open-source in-memory storage, | Redis (Remote Dictionary Server) is an open-source in-memory storage, ->: Redis (Remote Dictionary Server) is an open-source in-memory storage, |
3,594 | used as a distributed, in-memory key–value database, cache and message broker, with optional durability.
Because it holds all data in memory and because of its design, Redis offers low-latency reads and writes,
making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database,
and one of the most popular databases overall.This page covers how to use the Redis ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Redis wrappers.Installation and Setup​Install the Python SDK:pip install redisWrappers​All wrappers need a redis url connection string to connect to the database and support either a standalone Redis server
or a High-Availability setup with Replication and Redis Sentinels.Redis Standalone connection url​For a standalone Redis server, the official redis connection url formats can be used as described in the python redis module's
"from_url()" method Redis.from_urlExample: redis_url = "redis://:secret-pass@localhost:6379/0"Redis Sentinel connection url​For Redis sentinel setups, the connection scheme is "redis+sentinel".
This is an unofficial extension to the official IANA-registered protocol schemes, since there is no official connection url
scheme for Sentinels available.Example: redis_url = "redis+sentinel://:secret-pass@sentinel-host:26379/mymaster/0"The format is redis+sentinel://[[username]:[password]]@[host-or-ip]:[port]/[service-name]/[db-number]
with the default values of "service-name = mymaster" and "db-number = 0" if not set explicitly.
The service-name is the redis server monitoring group name as configured within the Sentinel. The current url format limits the connection string to one sentinel host only (no list can be given) and
both Redis server and sentinel must have the same password set (if used).Redis Cluster connection url​Redis cluster is not supported right now for all methods requiring a "redis_url" parameter. | Redis (Remote Dictionary Server) is an open-source in-memory storage, | Redis (Remote Dictionary Server) is an open-source in-memory storage, ->: used as a distributed, in-memory key–value database, cache and message broker, with optional durability.
Because it holds all data in memory and because of its design, Redis offers low-latency reads and writes,
making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database,
and one of the most popular databases overall.This page covers how to use the Redis ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Redis wrappers.Installation and Setup​Install the Python SDK:pip install redisWrappers​All wrappers need a redis url connection string to connect to the database and support either a standalone Redis server
or a High-Availability setup with Replication and Redis Sentinels.Redis Standalone connection url​For a standalone Redis server, the official redis connection url formats can be used as described in the python redis module's
"from_url()" method Redis.from_urlExample: redis_url = "redis://:secret-pass@localhost:6379/0"Redis Sentinel connection url​For Redis sentinel setups, the connection scheme is "redis+sentinel".
This is an unofficial extension to the official IANA-registered protocol schemes, since there is no official connection url
scheme for Sentinels available.Example: redis_url = "redis+sentinel://:secret-pass@sentinel-host:26379/mymaster/0"The format is redis+sentinel://[[username]:[password]]@[host-or-ip]:[port]/[service-name]/[db-number]
with the default values of "service-name = mymaster" and "db-number = 0" if not set explicitly.
The service-name is the redis server monitoring group name as configured within the Sentinel. The current url format limits the connection string to one sentinel host only (no list can be given) and
both Redis server and sentinel must have the same password set (if used).Redis Cluster connection url​Redis cluster is not supported right now for all methods requiring a "redis_url" parameter. |
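To make the two url shapes concrete, here is a stdlib-only sketch that parses the standalone example url and assembles a sentinel url from its parts. The sentinel_url helper is ours, for illustration only (redis-py and LangChain do this internally):

```python
from urllib.parse import urlparse

# Parse the standalone example url into its components.
u = urlparse("redis://:secret-pass@localhost:6379/0")
print(u.scheme, u.password, u.hostname, u.port, u.path.lstrip("/"))
# redis secret-pass localhost 6379 0

def sentinel_url(host: str, port: int = 26379, password: str = "",
                 service_name: str = "mymaster", db: int = 0) -> str:
    """Assemble a redis+sentinel url following the format described above.

    Illustrative helper; the defaults mirror the documented
    "service-name = mymaster" and "db-number = 0".
    """
    auth = f":{password}@" if password else ""
    return f"redis+sentinel://{auth}{host}:{port}/{service_name}/{db}"

print(sentinel_url("sentinel-host", password="secret-pass"))
# redis+sentinel://:secret-pass@sentinel-host:26379/mymaster/0
```

Either string can then be passed wherever a "redis_url" parameter is expected.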
3,595 | The only way to use a Redis Cluster is with LangChain classes accepting a preconfigured Redis client like RedisCache | Redis (Remote Dictionary Server) is an open-source in-memory storage, | Redis (Remote Dictionary Server) is an open-source in-memory storage, ->: The only way to use a Redis Cluster is with LangChain classes accepting a preconfigured Redis client like RedisCache |
3,596 | (example below).CacheThe Cache wrapper allows Redis to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.Standard CacheThe standard cache is the bread-and-butter Redis use case in production for both open-source and enterprise users globally.To import this cache:from langchain.cache import RedisCacheTo use this cache with your LLMs:from langchain.globals import set_llm_cacheimport redisredis_client = redis.Redis.from_url(...)set_llm_cache(RedisCache(redis_client))Semantic CacheSemantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it uses Redis as both a cache and a vectorstore.To import this cache:from langchain.cache import RedisSemanticCacheTo use this cache with your LLMs:from langchain.globals import set_llm_cacheimport redis# use any embedding provider...from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddingsredis_url = "redis://localhost:6379"set_llm_cache(RedisSemanticCache( embedding=FakeEmbeddings(), redis_url=redis_url))VectorStoreThe vectorstore wrapper turns Redis into a low-latency vector database for semantic search or LLM content retrieval.To import this vectorstore:from langchain.vectorstores import RedisFor a more detailed walkthrough of the Redis vectorstore wrapper, see this notebook.RetrieverThe Redis vector store retriever wrapper generalizes the vectorstore class to perform low-latency document retrieval.
To create the retriever, simply call .as_retriever() on the base vectorstore class.MemoryRedis can be used to persist LLM conversations.Vector Store Retriever MemoryFor a more detailed walkthrough of the VectorStoreRetrieverMemory wrapper, see this notebook.Chat Message History MemoryFor a detailed example of using Redis to cache conversation message history, see this notebook.PreviousRedditNextReplicateInstallation and SetupWrappersRedis Standalone connection | Redis (Remote Dictionary Server) is an open-source in-memory storage, | Redis (Remote Dictionary Server) is an open-source in-memory storage, ->: (example below).CacheThe Cache wrapper allows Redis to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.Standard CacheThe standard cache is the bread-and-butter Redis use case in production for both open-source and enterprise users globally.To import this cache:from langchain.cache import RedisCacheTo use this cache with your LLMs:from langchain.globals import set_llm_cacheimport redisredis_client = redis.Redis.from_url(...)set_llm_cache(RedisCache(redis_client))Semantic CacheSemantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results.
Under the hood it uses Redis as both a cache and a vectorstore.To import this cache:from langchain.cache import RedisSemanticCacheTo use this cache with your LLMs:from langchain.globals import set_llm_cacheimport redis# use any embedding provider...from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddingsredis_url = "redis://localhost:6379"set_llm_cache(RedisSemanticCache( embedding=FakeEmbeddings(), redis_url=redis_url))VectorStoreThe vectorstore wrapper turns Redis into a low-latency vector database for semantic search or LLM content retrieval.To import this vectorstore:from langchain.vectorstores import RedisFor a more detailed walkthrough of the Redis vectorstore wrapper, see this notebook.RetrieverThe Redis vector store retriever wrapper generalizes the vectorstore class to perform low-latency document retrieval. To create the retriever, simply call .as_retriever() on the base vectorstore class.MemoryRedis can be used to persist LLM conversations.Vector Store Retriever MemoryFor a more detailed walkthrough of the VectorStoreRetrieverMemory wrapper, see this notebook.Chat Message History MemoryFor a detailed example of using Redis to cache conversation message history, see this notebook.PreviousRedditNextReplicateInstallation and SetupWrappersRedis Standalone connection
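The semantic-cache lookup described above can be sketched without any external services. The toy below is illustrative only (it is not LangChain's RedisSemanticCache, and its character-frequency "embedding" is deliberately fake, in the spirit of the FakeEmbeddings helper in the snippet): prompts are matched by cosine similarity against previously cached entries instead of exact string equality.

```python
import math

def fake_embed(text: str) -> list[float]:
    # Deterministic toy "embedding": a 26-dim character-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ToySemanticCache:
    """Return a cached response when a new prompt is similar enough."""
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []  # (embedding, response)

    def update(self, prompt: str, response: str) -> None:
        self.entries.append((fake_embed(prompt), response))

    def lookup(self, prompt: str):
        qv = fake_embed(prompt)
        best = max(self.entries, key=lambda e: cosine(qv, e[0]), default=None)
        if best is not None and cosine(qv, best[0]) >= self.threshold:
            return best[1]
        return None  # cache miss: the LLM would be called here

cache = ToySemanticCache()
cache.update("What is the capital of France?", "Paris")
print(cache.lookup("what is the capital of france"))  # similar prompt -> Paris
```

The real implementation stores the embeddings and responses in Redis and uses its vector index for the similarity search; the threshold-based hit/miss decision is the same idea.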
3,597 | and SetupWrappersRedis Standalone connection urlRedis Sentinel connection urlRedis Cluster connection urlCacheVectorStoreRetrieverMemoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Redis (Remote Dictionary Server) is an open-source in-memory storage, | Redis (Remote Dictionary Server) is an open-source in-memory storage, ->: and SetupWrappersRedis Standalone connection urlRedis Sentinel connection urlRedis Cluster connection urlCacheVectorStoreRetrieverMemoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
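The chat-message-history pattern mentioned above boils down to appending one serialized message per conversation turn under a per-session key. The self-contained toy below (an in-memory stand-in for the two Redis list commands involved, not LangChain's actual RedisChatMessageHistory; the `message_store:` key prefix is an assumption for illustration) sketches the idea:

```python
import json

class ToyRedis:
    """In-memory stand-in for the two Redis list commands used here."""
    def __init__(self):
        self.store: dict[str, list[str]] = {}

    def rpush(self, key: str, value: str) -> None:
        self.store.setdefault(key, []).append(value)

    def lrange(self, key: str, start: int, end: int) -> list[str]:
        # Simplified: only supports end == -1 ("to the last element") or end >= 0.
        items = self.store.get(key, [])
        stop = len(items) if end == -1 else end + 1
        return items[start:stop]

class ChatHistory:
    """Persist one conversation per session key, one JSON blob per message."""
    def __init__(self, client: ToyRedis, session_id: str):
        self.client = client
        self.key = f"message_store:{session_id}"

    def add_message(self, role: str, content: str) -> None:
        self.client.rpush(self.key, json.dumps({"role": role, "content": content}))

    def messages(self) -> list[dict]:
        return [json.loads(m) for m in self.client.lrange(self.key, 0, -1)]

r = ToyRedis()
history = ChatHistory(r, "session-1")
history.add_message("human", "Hi!")
history.add_message("ai", "Hello, how can I help?")
print([m["role"] for m in history.messages()])  # ['human', 'ai']
```

Because each session lives under its own key, multiple conversations can be persisted side by side in the same Redis instance, and replaying a conversation is a single range read.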
3,598 | DataForSEO | 🦜️🔗 Langchain | This page provides instructions on how to use the DataForSEO search APIs within LangChain. | This page provides instructions on how to use the DataForSEO search APIs within LangChain. ->: DataForSEO | 🦜️🔗 Langchain