Unnamed: 0
int64
0
4.66k
page content
stringlengths
23
2k
description
stringlengths
8
925
output
stringlengths
38
2.93k
3,900
and toolkitsMemoryCallbacksChat loadersProvidersMoreSupabase (Postgres)On this pageSupabase (Postgres)Supabase is an open-source Firebase alternative.
Supabase is an open-source Firebase alternative.
Supabase is an open-source Firebase alternative. ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreSupabase (Postgres)On this pageSupabase (Postgres)Supabase is an open-source Firebase alternative.
3,901
Supabase is built on top of PostgreSQL, which offers strong SQL querying capabilities and enables a simple interface with already-existing tools and frameworks.PostgreSQL also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance.Installation and Setup​We need to install supabase python package.pip install supabaseVector Store​See a usage example.from langchain.vectorstores import SupabaseVectorStorePreviousStripeNextNebulaInstallation and SetupVector StoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Supabase is an open-source Firebase alternative.
Supabase is an open-source Firebase alternative. ->: Supabase is built on top of PostgreSQL, which offers strong SQL querying capabilities and enables a simple interface with already-existing tools and frameworks.PostgreSQL also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance.Installation and Setup​We need to install supabase python package.pip install supabaseVector Store​See a usage example.from langchain.vectorstores import SupabaseVectorStorePreviousStripeNextNebulaInstallation and SetupVector StoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,902
Google Serper | 🦜️🔗 Langchain
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search. ->: Google Serper | 🦜️🔗 Langchain
3,903
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreActiveloop Deep LakeAI21 LabsAimAINetworkAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasAwaDBAWS DynamoDBAZLyricsBagelDBBananaBasetenBeamBeautiful SoupBiliBiliNIBittensorBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLClickHouseCnosDBCohereCollege ConfidentialCometConfident AIConfluenceC TransformersDashVectorDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeepSparseDiffbotDingoDiscordDocArrayDoctranDocugamiDuckDBElasticsearchEpsillaEverNoteFacebook ChatFacebook FaissFigmaFireworksFlyteForefrontAIGitGitBookGoldenGoogle Document AIGoogle SerperGooseAIGPT4AllGradientGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHTML to textHugging FaceiFixitIMSDbInfinoJavelin AI GatewayJinaKonkoLanceDBLangChain Decorators ✨Llama.cppLog10MarqoMediaWikiDumpMeilisearchMetalMilvusMinimaxMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMongoDB AtlasMotherduckMotörheadMyScaleNeo4jNLPCloudNotion DBNucliaObsidianOpenLLMOpenSearchOpenWeatherMapPetalsPostgres EmbeddingPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerprovidersPsychicPubMedQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4ScaNNSearchApiSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeSupabase (Postgres)NebulaTairTelegramTencentVectorDBTensorFlow DatasetsTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredUpstash RedisUSearchVearchVectaraVespaWandB TracingWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterXataXorbits Inference (Xinference)YandexYeager.aiYouTubeZepZillizComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreActiveloop Deep LakeAI21 LabsAimAINetworkAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasAwaDBAWS DynamoDBAZLyricsBagelDBBananaBasetenBeamBeautiful SoupBiliBiliNIBittensorBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLClickHouseCnosDBCohereCollege ConfidentialCometConfident AIConfluenceC TransformersDashVectorDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeepSparseDiffbotDingoDiscordDocArrayDoctranDocugamiDuckDBElasticsearchEpsillaEverNoteFacebook ChatFacebook FaissFigmaFireworksFlyteForefrontAIGitGitBookGoldenGoogle Document AIGoogle SerperGooseAIGPT4AllGradientGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHTML to textHugging FaceiFixitIMSDbInfinoJavelin AI GatewayJinaKonkoLanceDBLangChain Decorators ✨Llama.cppLog10MarqoMediaWikiDumpMeilisearchMetalMilvusMinimaxMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMongoDB AtlasMotherduckMotörheadMyScaleNeo4jNLPCloudNotion DBNucliaObsidianOpenLLMOpenSearchOpenWeatherMapPetalsPostgres EmbeddingPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerprovidersPsychicPubMedQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4ScaNNSearchApiSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeSupabase (Postgres)NebulaTairTelegramTencentVectorDBTensorFlow DatasetsTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredUpstash RedisUSearchVearchVectaraVespaWandB TracingWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterXataXorbits Inference (Xinference)YandexYeager.aiYouTubeZepZillizComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat
3,904
and toolkitsMemoryCallbacksChat loadersProvidersMoreGoogle SerperOn this pageGoogle SerperThis page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search. ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreGoogle SerperOn this pageGoogle SerperThis page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
3,905
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.Setup​Go to serper.dev to sign up for a free accountGet the api key and set it as an environment variable (SERPER_API_KEY)Wrappers​Utility​There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility:from langchain.utilities import GoogleSerperAPIWrapperYou can use it as part of a Self Ask chain:from langchain.utilities import GoogleSerperAPIWrapperfrom langchain.llms.openai import OpenAIfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypeimport osos.environ["SERPER_API_KEY"] = ""os.environ['OPENAI_API_KEY'] = ""llm = OpenAI(temperature=0)search = GoogleSerperAPIWrapper()tools = [ Tool( name="Intermediate Answer", func=search.run, description="useful for when you need to ask with search" )]self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")Output​Entering new AgentExecutor chain... Yes.Follow up: Who is the reigning men's U.S. Open champion?Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.Follow up: Where is Carlos Alcaraz from?Intermediate answer: El Palmar, SpainSo the final answer is: El Palmar, Spain> Finished chain.'El Palmar, Spain'For a more detailed walkthrough of this wrapper, see this notebook.Tool​You can also easily load this wrapper as a Tool (to use with an Agent). You can do this with:from langchain.agents import load_toolstools = load_tools(["google-serper"])For more information on tools, see this page.PreviousGoogle Document AINextGooseAISetupWrappersUtilityToolCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search. ->: It is broken into two parts: setup, and then references to the specific Google Serper wrapper.Setup​Go to serper.dev to sign up for a free accountGet the api key and set it as an environment variable (SERPER_API_KEY)Wrappers​Utility​There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility:from langchain.utilities import GoogleSerperAPIWrapperYou can use it as part of a Self Ask chain:from langchain.utilities import GoogleSerperAPIWrapperfrom langchain.llms.openai import OpenAIfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypeimport osos.environ["SERPER_API_KEY"] = ""os.environ['OPENAI_API_KEY'] = ""llm = OpenAI(temperature=0)search = GoogleSerperAPIWrapper()tools = [ Tool( name="Intermediate Answer", func=search.run, description="useful for when you need to ask with search" )]self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")Output​Entering new AgentExecutor chain... Yes.Follow up: Who is the reigning men's U.S. Open champion?Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.Follow up: Where is Carlos Alcaraz from?Intermediate answer: El Palmar, SpainSo the final answer is: El Palmar, Spain> Finished chain.'El Palmar, Spain'For a more detailed walkthrough of this wrapper, see this notebook.Tool​You can also easily load this wrapper as a Tool (to use with an Agent). You can do this with:from langchain.agents import load_toolstools = load_tools(["google-serper"])For more information on tools, see this page.PreviousGoogle Document AINextGooseAISetupWrappersUtilityToolCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,906
Yandex | 🦜️🔗 Langchain
All functionality related to Yandex Cloud
All functionality related to Yandex Cloud ->: Yandex | 🦜️🔗 Langchain
3,907
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreActiveloop Deep LakeAI21 LabsAimAINetworkAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasAwaDBAWS DynamoDBAZLyricsBagelDBBananaBasetenBeamBeautiful SoupBiliBiliNIBittensorBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLClickHouseCnosDBCohereCollege ConfidentialCometConfident AIConfluenceC TransformersDashVectorDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeepSparseDiffbotDingoDiscordDocArrayDoctranDocugamiDuckDBElasticsearchEpsillaEverNoteFacebook ChatFacebook FaissFigmaFireworksFlyteForefrontAIGitGitBookGoldenGoogle Document AIGoogle SerperGooseAIGPT4AllGradientGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHTML to textHugging FaceiFixitIMSDbInfinoJavelin AI GatewayJinaKonkoLanceDBLangChain Decorators ✨Llama.cppLog10MarqoMediaWikiDumpMeilisearchMetalMilvusMinimaxMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMongoDB AtlasMotherduckMotörheadMyScaleNeo4jNLPCloudNotion DBNucliaObsidianOpenLLMOpenSearchOpenWeatherMapPetalsPostgres EmbeddingPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerprovidersPsychicPubMedQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4ScaNNSearchApiSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeSupabase (Postgres)NebulaTairTelegramTencentVectorDBTensorFlow DatasetsTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredUpstash RedisUSearchVearchVectaraVespaWandB TracingWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterXataXorbits Inference (Xinference)YandexYeager.aiYouTubeZepZillizComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat
All functionality related to Yandex Cloud
All functionality related to Yandex Cloud ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreActiveloop Deep LakeAI21 LabsAimAINetworkAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasAwaDBAWS DynamoDBAZLyricsBagelDBBananaBasetenBeamBeautiful SoupBiliBiliNIBittensorBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLClickHouseCnosDBCohereCollege ConfidentialCometConfident AIConfluenceC TransformersDashVectorDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeepSparseDiffbotDingoDiscordDocArrayDoctranDocugamiDuckDBElasticsearchEpsillaEverNoteFacebook ChatFacebook FaissFigmaFireworksFlyteForefrontAIGitGitBookGoldenGoogle Document AIGoogle SerperGooseAIGPT4AllGradientGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHTML to textHugging FaceiFixitIMSDbInfinoJavelin AI GatewayJinaKonkoLanceDBLangChain Decorators ✨Llama.cppLog10MarqoMediaWikiDumpMeilisearchMetalMilvusMinimaxMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMongoDB AtlasMotherduckMotörheadMyScaleNeo4jNLPCloudNotion DBNucliaObsidianOpenLLMOpenSearchOpenWeatherMapPetalsPostgres EmbeddingPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerprovidersPsychicPubMedQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4ScaNNSearchApiSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeSupabase (Postgres)NebulaTairTelegramTencentVectorDBTensorFlow DatasetsTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredUpstash RedisUSearchVearchVectaraVespaWandB TracingWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterXataXorbits Inference (Xinference)YandexYeager.aiYouTubeZepZillizComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat
3,908
and toolkitsMemoryCallbacksChat loadersProvidersMoreYandexOn this pageYandexAll functionality related to Yandex CloudYandex Cloud is a public cloud platform. Installation and Setup​Yandex Cloud SDK can be installed via pip from PyPI: pip install yandexcloudLLMs​YandexGPT​See a usage example.from langchain.llms import YandexGPTChat models​YandexGPT​See a usage example.from langchain.chat_models import ChatYandexGPTPreviousXorbits Inference (Xinference)NextYeager.aiInstallation and SetupLLMsYandexGPTChat modelsYandexGPTCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
All functionality related to Yandex Cloud
All functionality related to Yandex Cloud ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreYandexOn this pageYandexAll functionality related to Yandex CloudYandex Cloud is a public cloud platform. Installation and Setup​Yandex Cloud SDK can be installed via pip from PyPI: pip install yandexcloudLLMs​YandexGPT​See a usage example.from langchain.llms import YandexGPTChat models​YandexGPT​See a usage example.from langchain.chat_models import ChatYandexGPTPreviousXorbits Inference (Xinference)NextYeager.aiInstallation and SetupLLMsYandexGPTChat modelsYandexGPTCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,909
GPT4All | 🦜️🔗 Langchain
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example. ->: GPT4All | 🦜️🔗 Langchain
3,910
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreActiveloop Deep LakeAI21 LabsAimAINetworkAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasAwaDBAWS DynamoDBAZLyricsBagelDBBananaBasetenBeamBeautiful SoupBiliBiliNIBittensorBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLClickHouseCnosDBCohereCollege ConfidentialCometConfident AIConfluenceC TransformersDashVectorDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeepSparseDiffbotDingoDiscordDocArrayDoctranDocugamiDuckDBElasticsearchEpsillaEverNoteFacebook ChatFacebook FaissFigmaFireworksFlyteForefrontAIGitGitBookGoldenGoogle Document AIGoogle SerperGooseAIGPT4AllGradientGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHTML to textHugging FaceiFixitIMSDbInfinoJavelin AI GatewayJinaKonkoLanceDBLangChain Decorators ✨Llama.cppLog10MarqoMediaWikiDumpMeilisearchMetalMilvusMinimaxMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMongoDB AtlasMotherduckMotörheadMyScaleNeo4jNLPCloudNotion DBNucliaObsidianOpenLLMOpenSearchOpenWeatherMapPetalsPostgres EmbeddingPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerprovidersPsychicPubMedQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4ScaNNSearchApiSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeSupabase (Postgres)NebulaTairTelegramTencentVectorDBTensorFlow DatasetsTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredUpstash RedisUSearchVearchVectaraVespaWandB TracingWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterXataXorbits Inference (Xinference)YandexYeager.aiYouTubeZepZillizComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreActiveloop Deep LakeAI21 LabsAimAINetworkAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasAwaDBAWS DynamoDBAZLyricsBagelDBBananaBasetenBeamBeautiful SoupBiliBiliNIBittensorBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLClickHouseCnosDBCohereCollege ConfidentialCometConfident AIConfluenceC TransformersDashVectorDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeepSparseDiffbotDingoDiscordDocArrayDoctranDocugamiDuckDBElasticsearchEpsillaEverNoteFacebook ChatFacebook FaissFigmaFireworksFlyteForefrontAIGitGitBookGoldenGoogle Document AIGoogle SerperGooseAIGPT4AllGradientGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHTML to textHugging FaceiFixitIMSDbInfinoJavelin AI GatewayJinaKonkoLanceDBLangChain Decorators ✨Llama.cppLog10MarqoMediaWikiDumpMeilisearchMetalMilvusMinimaxMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMongoDB AtlasMotherduckMotörheadMyScaleNeo4jNLPCloudNotion DBNucliaObsidianOpenLLMOpenSearchOpenWeatherMapPetalsPostgres EmbeddingPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerprovidersPsychicPubMedQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4ScaNNSearchApiSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeSupabase (Postgres)NebulaTairTelegramTencentVectorDBTensorFlow DatasetsTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredUpstash RedisUSearchVearchVectaraVespaWandB TracingWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterXataXorbits Inference (Xinference)YandexYeager.aiYouTubeZepZillizComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat
3,911
and toolkitsMemoryCallbacksChat loadersProvidersMoreGPT4AllOn this pageGPT4AllThis page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.Installation and Setup​Install the Python package with pip install pyllamacppDownload a GPT4All model and place it in your desired directoryUsage​GPT4All​To use the GPT4All wrapper, you need to provide the path to the pre-trained model file and the model's configuration.from langchain.llms import GPT4All# Instantiate the model. Callbacks support token-wise streamingmodel = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)# Generate textresponse = model("Once upon a time, ")You can also customize the generation parameters, such as n_predict, temp, top_p, top_k, and others.To stream the model's predictions, add in a CallbackManager.from langchain.llms import GPT4Allfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler# There are many CallbackHandlers supported, such as# from langchain.callbacks.streamlit import StreamlitCallbackHandlercallbacks = [StreamingStdOutCallbackHandler()]model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)# Generate text. Tokens are streamed through the callback manager.model("Once upon a time, ", callbacks=callbacks)Model File​You can find links to model file downloads in the pyllamacpp repository.For a more detailed walkthrough of this, see this notebookPreviousGooseAINextGradientInstallation and SetupUsageGPT4AllModel FileCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example. ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreGPT4AllOn this pageGPT4AllThis page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.Installation and Setup​Install the Python package with pip install pyllamacppDownload a GPT4All model and place it in your desired directoryUsage​GPT4All​To use the GPT4All wrapper, you need to provide the path to the pre-trained model file and the model's configuration.from langchain.llms import GPT4All# Instantiate the model. Callbacks support token-wise streamingmodel = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)# Generate textresponse = model("Once upon a time, ")You can also customize the generation parameters, such as n_predict, temp, top_p, top_k, and others.To stream the model's predictions, add in a CallbackManager.from langchain.llms import GPT4Allfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler# There are many CallbackHandlers supported, such as# from langchain.callbacks.streamlit import StreamlitCallbackHandlercallbacks = [StreamingStdOutCallbackHandler()]model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)# Generate text. Tokens are streamed through the callback manager.model("Once upon a time, ", callbacks=callbacks)Model File​You can find links to model file downloads in the pyllamacpp repository.For a more detailed walkthrough of this, see this notebookPreviousGooseAINextGradientInstallation and SetupUsageGPT4AllModel FileCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,912
AWS | 🦜️🔗 Langchain
All functionality related to Amazon AWS platform
All functionality related to Amazon AWS platform ->: AWS | 🦜️🔗 Langchain
3,913
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersProvidersAWSOn this pageAWSAll functionality related to Amazon AWS platformLLMs​Bedrock​See a usage example.from langchain.llms.bedrock import BedrockAmazon API Gateway​Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.See a usage example.from langchain.llms import AmazonAPIGatewayapi_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"# These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStartmodel_kwargs = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2,}llm = AmazonAPIGateway(api_url=api_url, model_kwargs=model_kwargs)SageMaker Endpoint​Amazon SageMaker is a system
All functionality related to Amazon AWS platform
All functionality related to Amazon AWS platform ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersProvidersAWSOn this pageAWSAll functionality related to Amazon AWS platformLLMs​Bedrock​See a usage example.from langchain.llms.bedrock import BedrockAmazon API Gateway​Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.See a usage example.from langchain.llms import AmazonAPIGatewayapi_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"# These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStartmodel_kwargs = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2,}llm = AmazonAPIGateway(api_url=api_url, model_kwargs=model_kwargs)SageMaker Endpoint​Amazon SageMaker is a system
3,914
Endpoint‚ÄãAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models with fully managed infrastructure, tools, and workflows.We use SageMaker to host our model and expose it as the SageMaker Endpoint.See a usage example.from langchain.llms import SagemakerEndpointfrom langchain.llms.sagemaker_endpoint import LLMContentHandlerText Embedding Models‚ÄãBedrock‚ÄãSee a usage example.from langchain.embeddings import BedrockEmbeddingsSageMaker Endpoint‚ÄãSee a usage example.from langchain.embeddings import SagemakerEndpointEmbeddingsfrom langchain.llms.sagemaker_endpoint import ContentHandlerBaseDocument loaders‚ÄãAWS S3 Directory and File‚ÄãAmazon Simple Storage Service (Amazon S3) is an object storage service.
All functionality related to Amazon AWS platform
All functionality related to Amazon AWS platform ->: Endpoint‚ÄãAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models with fully managed infrastructure, tools, and workflows.We use SageMaker to host our model and expose it as the SageMaker Endpoint.See a usage example.from langchain.llms import SagemakerEndpointfrom langchain.llms.sagemaker_endpoint import LLMContentHandlerText Embedding Models‚ÄãBedrock‚ÄãSee a usage example.from langchain.embeddings import BedrockEmbeddingsSageMaker Endpoint‚ÄãSee a usage example.from langchain.embeddings import SagemakerEndpointEmbeddingsfrom langchain.llms.sagemaker_endpoint import ContentHandlerBaseDocument loaders‚ÄãAWS S3 Directory and File‚ÄãAmazon Simple Storage Service (Amazon S3) is an object storage service.
3,915
AWS S3 Directory AWS S3 BucketsSee a usage example for S3DirectoryLoader.See a usage example for S3FileLoader.from langchain.document_loaders import S3DirectoryLoader, S3FileLoaderMemory​AWS DynamoDB​AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.We have to configure the AWS CLI. We need to install the boto3 library.pip install boto3See a usage example.from langchain.memory import DynamoDBChatMessageHistoryPreviousAnthropicNextGoogleLLMsBedrockAmazon API GatewaySageMaker EndpointText Embedding ModelsBedrockSageMaker EndpointDocument loadersAWS S3 Directory and FileMemoryAWS DynamoDBCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
All functionality related to Amazon AWS platform
All functionality related to Amazon AWS platform ->: AWS S3 Directory AWS S3 BucketsSee a usage example for S3DirectoryLoader.See a usage example for S3FileLoader.from langchain.document_loaders import S3DirectoryLoader, S3FileLoaderMemory​AWS DynamoDB​AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.We have to configure the AWS CLI. We need to install the boto3 library.pip install boto3See a usage example.from langchain.memory import DynamoDBChatMessageHistoryPreviousAnthropicNextGoogleLLMsBedrockAmazon API GatewaySageMaker EndpointText Embedding ModelsBedrockSageMaker EndpointDocument loadersAWS S3 Directory and FileMemoryAWS DynamoDBCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,916
Slack | 🦜️🔗 Langchain
This notebook shows how to use the Slack chat loader. This class helps map exported slack conversations to LangChain chat messages.
This notebook shows how to use the Slack chat loader. This class helps map exported slack conversations to LangChain chat messages. ->: Slack | 🦜️🔗 Langchain
3,917
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersSlackOn this pageSlackThis notebook shows how to use the Slack chat loader. This class helps map exported slack conversations to LangChain chat messages.The process has three steps:Export the desired conversation thread by following the instructions here.Create the SlackChatLoader with the file path pointed to the json file or directory of JSON filesCall loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.1. Create message dump​Currently (2023/08/23) this loader best supports a zip directory of files in the format generated by exporting your a direct message conversation from Slack. Follow up-to-date instructions from slack on how to do so.We have an example in the LangChain repo.import requestspermalink = "https://raw.githubusercontent.com/langchain-ai/langchain/342087bdfa3ac31d622385d0f2d09cf5e06c8db3/libs/langchain/tests/integration_tests/examples/slack_export.zip"response = requests.get(permalink)with open("slack_dump.zip", "wb") as f: f.write(response.content)2. Create the Chat Loader​Provide the loader with the file path to the zip directory. You can optionally specify the user id that maps to an ai message as well an configure whether to merge message runs.from langchain.chat_loaders.slack import SlackChatLoaderloader = SlackChatLoader( path="slack_dump.zip",)3. Load
This notebook shows how to use the Slack chat loader. This class helps map exported slack conversations to LangChain chat messages.
This notebook shows how to use the Slack chat loader. This class helps map exported slack conversations to LangChain chat messages. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersSlackOn this pageSlackThis notebook shows how to use the Slack chat loader. This class helps map exported slack conversations to LangChain chat messages.The process has three steps:Export the desired conversation thread by following the instructions here.Create the SlackChatLoader with the file path pointed to the json file or directory of JSON filesCall loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.1. Create message dump​Currently (2023/08/23) this loader best supports a zip directory of files in the format generated by exporting your a direct message conversation from Slack. Follow up-to-date instructions from slack on how to do so.We have an example in the LangChain repo.import requestspermalink = "https://raw.githubusercontent.com/langchain-ai/langchain/342087bdfa3ac31d622385d0f2d09cf5e06c8db3/libs/langchain/tests/integration_tests/examples/slack_export.zip"response = requests.get(permalink)with open("slack_dump.zip", "wb") as f: f.write(response.content)2. Create the Chat Loader​Provide the loader with the file path to the zip directory. You can optionally specify the user id that maps to an ai message as well an configure whether to merge message runs.from langchain.chat_loaders.slack import SlackChatLoaderloader = SlackChatLoader( path="slack_dump.zip",)3. Load
3,918
path="slack_dump.zip",)3. Load messages​The load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "U0500003428" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="U0500003428"))Next Steps​You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message. from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[1]['messages']): print(chunk.content, end="", flush=True) Hi, I hope you're doing well. I wanted to reach out and ask if you'd be available to meet up for coffee sometime next week. I'd love to catch up and hear about what's been going on in your life. Let me know if you're interested and we can find a time that works for both of us. Looking forward to hearing from you! Best, [Your Name]PreviousFine-Tuning on LangSmith LLM RunsNexttelegram1. Create message dump2. Create the Chat Loader3. Load messagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to use the Slack chat loader. This class helps map exported slack conversations to LangChain chat messages.
This notebook shows how to use the Slack chat loader. This class helps map exported slack conversations to LangChain chat messages. ->: path="slack_dump.zip",)3. Load messages​The load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "U0500003428" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="U0500003428"))Next Steps​You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message. from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[1]['messages']): print(chunk.content, end="", flush=True) Hi, I hope you're doing well. I wanted to reach out and ask if you'd be available to meet up for coffee sometime next week. I'd love to catch up and hear about what's been going on in your life. Let me know if you're interested and we can find a time that works for both of us. Looking forward to hearing from you! Best, [Your Name]PreviousFine-Tuning on LangSmith LLM RunsNexttelegram1. Create message dump2. Create the Chat Loader3. Load messagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,919
telegram | 🦜️🔗 Langchain
Telegram
Telegram ->: telegram | 🦜️🔗 Langchain
3,920
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loaderstelegramOn this pagetelegramTelegramThis notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages.The process has three steps:Export the chat .txt file by copying chats from the Discord app and pasting them in a file on your local computerCreate the TelegramChatLoader with the file path pointed to the json file or directory of JSON filesCall loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.1. Create message dump​Currently (2023/08/23) this loader best supports json files in the format generated by exporting your chat history from the Telegram Desktop App.Important: There are 'lite' versions of telegram such as "Telegram for MacOS" that lack the export functionality. Please make sure you use the correct app to export the file.To make the export:Download and open telegram desktopSelect a conversationNavigate to the conversation settings (currently the three dots in the top right corner)Click "Export Chat History"Unselect photos and other media. Select "Machine-readable JSON" format to export.An example is below: telegram_conversation.json{ "name": "Jiminy", "type": "personal_chat", "id": 5965280513, "messages": [ { "id": 1, "type": "message", "date": "2023-08-23T13:11:23", "date_unixtime":
Telegram
Telegram ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loaderstelegramOn this pagetelegramTelegramThis notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages.The process has three steps:Export the chat .txt file by copying chats from the Discord app and pasting them in a file on your local computerCreate the TelegramChatLoader with the file path pointed to the json file or directory of JSON filesCall loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.1. Create message dump​Currently (2023/08/23) this loader best supports json files in the format generated by exporting your chat history from the Telegram Desktop App.Important: There are 'lite' versions of telegram such as "Telegram for MacOS" that lack the export functionality. Please make sure you use the correct app to export the file.To make the export:Download and open telegram desktopSelect a conversationNavigate to the conversation settings (currently the three dots in the top right corner)Click "Export Chat History"Unselect photos and other media. Select "Machine-readable JSON" format to export.An example is below: telegram_conversation.json{ "name": "Jiminy", "type": "personal_chat", "id": 5965280513, "messages": [ { "id": 1, "type": "message", "date": "2023-08-23T13:11:23", "date_unixtime":
3,921
"date": "2023-08-23T13:11:23", "date_unixtime": "1692821483", "from": "Jiminy Cricket", "from_id": "user123450513", "text": "You better trust your conscience", "text_entities": [ { "type": "plain", "text": "You better trust your conscience" } ] }, { "id": 2, "type": "message", "date": "2023-08-23T13:13:20", "date_unixtime": "1692821600", "from": "Batman & Robin", "from_id": "user6565661032", "text": "What did you just say?", "text_entities": [ { "type": "plain", "text": "What did you just say?" } ] } ]}2. Create the Chat Loader‚ÄãAll that's required is the file path. You can optionally specify the user name that maps to an ai message as well an configure whether to merge message runs.from langchain.chat_loaders.telegram import TelegramChatLoaderloader = TelegramChatLoader( path="./telegram_conversation.json", )3. Load messages‚ÄãThe load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "Jiminy Cricket" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Jiminy Cricket"))Next Steps‚ÄãYou can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True) I said, "You better trust your conscience."PreviousSlackNextTwitter (via Apify)1. Create message dump2. Create the Chat Loader3. Load messagesNext
Telegram
Telegram ->: "date": "2023-08-23T13:11:23", "date_unixtime": "1692821483", "from": "Jiminy Cricket", "from_id": "user123450513", "text": "You better trust your conscience", "text_entities": [ { "type": "plain", "text": "You better trust your conscience" } ] }, { "id": 2, "type": "message", "date": "2023-08-23T13:13:20", "date_unixtime": "1692821600", "from": "Batman & Robin", "from_id": "user6565661032", "text": "What did you just say?", "text_entities": [ { "type": "plain", "text": "What did you just say?" } ] } ]}2. Create the Chat Loader‚ÄãAll that's required is the file path. You can optionally specify the user name that maps to an ai message as well an configure whether to merge message runs.from langchain.chat_loaders.telegram import TelegramChatLoaderloader = TelegramChatLoader( path="./telegram_conversation.json", )3. Load messages‚ÄãThe load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "Jiminy Cricket" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Jiminy Cricket"))Next Steps‚ÄãYou can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True) I said, "You better trust your conscience."PreviousSlackNextTwitter (via Apify)1. Create message dump2. Create the Chat Loader3. Load messagesNext
3,922
dump2. Create the Chat Loader3. Load messagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Telegram
Telegram ->: dump2. Create the Chat Loader3. Load messagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,923
Discord | 🦜️🔗 Langchain
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages. ->: Discord | 🦜️🔗 Langchain
3,924
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersDiscordOn this pageDiscordThis notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.The process has four steps:Create the chat .txt file by copying chats from the Discord app and pasting them in a file on your local computerCopy the chat loader definition from below to a local file.Initialize the DiscordChatLoader with the file path pointed to the text file.Call loader.load() (or loader.lazy_load()) to perform the conversion.1. Create message dump​Currently (2023/08/23) this loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example.discord_chats.txttalkingtower — 08/15/2023 11:10 AMLove music! Do you like jazz?reporterbob — 08/15/2023 9:27 PMYes! Jazz is fantastic. Ever heard this one?WebsiteListen to classic jazz track...talkingtower — Yesterday at 5:03 AMIndeed! Great choice. 🎷reporterbob — Yesterday at 5:23 AMThanks! How about some virtual sightseeing?WebsiteVirtual tour of famous landmarks...talkingtower — Today at 2:38 PMSounds fun! Let's explore.reporterbob — Today at 2:56 PMEnjoy the tour! See you around.talkingtower — Today at 3:00 PMThank you! Goodbye! 👋reporterbob — Today at 3:02 PMFarewell! Happy exploring.2. Define chat loader​LangChain currently does not support import loggingimport refrom typing import Iterator, Listfrom langchain.schema import BaseMessage,
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersDiscordOn this pageDiscordThis notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.The process has four steps:Create the chat .txt file by copying chats from the Discord app and pasting them in a file on your local computerCopy the chat loader definition from below to a local file.Initialize the DiscordChatLoader with the file path pointed to the text file.Call loader.load() (or loader.lazy_load()) to perform the conversion.1. Create message dump​Currently (2023/08/23) this loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example.discord_chats.txttalkingtower — 08/15/2023 11:10 AMLove music! Do you like jazz?reporterbob — 08/15/2023 9:27 PMYes! Jazz is fantastic. Ever heard this one?WebsiteListen to classic jazz track...talkingtower — Yesterday at 5:03 AMIndeed! Great choice. 🎷reporterbob — Yesterday at 5:23 AMThanks! How about some virtual sightseeing?WebsiteVirtual tour of famous landmarks...talkingtower — Today at 2:38 PMSounds fun! Let's explore.reporterbob — Today at 2:56 PMEnjoy the tour! See you around.talkingtower — Today at 3:00 PMThank you! Goodbye! 👋reporterbob — Today at 3:02 PMFarewell! Happy exploring.2. Define chat loader​LangChain currently does not support import loggingimport refrom typing import Iterator, Listfrom langchain.schema import BaseMessage,
3,925
Listfrom langchain.schema import BaseMessage, HumanMessagefrom langchain.chat_loaders import base as chat_loaderslogger = logging.getLogger()class DiscordChatLoader(chat_loaders.BaseChatLoader): def __init__(self, path: str): """ Initialize the Discord chat loader. Args: path: Path to the exported Discord chat text file. """ self.path = path self._message_line_regex = re.compile( r"(.+?) — (\w{3,9} \d{1,2}(?:st|nd|rd|th)?(?:, \d{4})? \d{1,2}:\d{2} (?:AM|PM)|Today at \d{1,2}:\d{2} (?:AM|PM)|Yesterday at \d{1,2}:\d{2} (?:AM|PM))", # noqa flags=re.DOTALL, ) def _load_single_chat_session_from_txt( self, file_path: str ) -> chat_loaders.ChatSession: """ Load a single chat session from a text file. Args: file_path: Path to the text file containing the chat messages. Returns: A `ChatSession` object containing the loaded chat messages. """ with open(file_path, "r", encoding="utf-8") as file: lines = file.readlines() results: List[BaseMessage] = [] current_sender = None current_timestamp = None current_content = [] for line in lines: if re.match( r".+? — (\d{2}/\d{2}/\d{4} \d{1,2}:\d{2} (?:AM|PM)|Today at \d{1,2}:\d{2} (?:AM|PM)|Yesterday at \d{1,2}:\d{2} (?:AM|PM))", # noqa line, ): if current_sender and current_content: results.append( HumanMessage( content="".join(current_content).strip(), additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) current_sender, current_timestamp = line.split(" — ")[:2]
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages. ->: Listfrom langchain.schema import BaseMessage, HumanMessagefrom langchain.chat_loaders import base as chat_loaderslogger = logging.getLogger()class DiscordChatLoader(chat_loaders.BaseChatLoader): def __init__(self, path: str): """ Initialize the Discord chat loader. Args: path: Path to the exported Discord chat text file. """ self.path = path self._message_line_regex = re.compile( r"(.+?) — (\w{3,9} \d{1,2}(?:st|nd|rd|th)?(?:, \d{4})? \d{1,2}:\d{2} (?:AM|PM)|Today at \d{1,2}:\d{2} (?:AM|PM)|Yesterday at \d{1,2}:\d{2} (?:AM|PM))", # noqa flags=re.DOTALL, ) def _load_single_chat_session_from_txt( self, file_path: str ) -> chat_loaders.ChatSession: """ Load a single chat session from a text file. Args: file_path: Path to the text file containing the chat messages. Returns: A `ChatSession` object containing the loaded chat messages. """ with open(file_path, "r", encoding="utf-8") as file: lines = file.readlines() results: List[BaseMessage] = [] current_sender = None current_timestamp = None current_content = [] for line in lines: if re.match( r".+? — (\d{2}/\d{2}/\d{4} \d{1,2}:\d{2} (?:AM|PM)|Today at \d{1,2}:\d{2} (?:AM|PM)|Yesterday at \d{1,2}:\d{2} (?:AM|PM))", # noqa line, ): if current_sender and current_content: results.append( HumanMessage( content="".join(current_content).strip(), additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) current_sender, current_timestamp = line.split(" — ")[:2]
3,926
current_timestamp = line.split(" — ")[:2] current_content = [ line[len(current_sender) + len(current_timestamp) + 4 :].strip() ] elif re.match(r"\[\d{1,2}:\d{2} (?:AM|PM)\]", line.strip()): results.append( HumanMessage( content="".join(current_content).strip(), additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) current_timestamp = line.strip()[1:-1] current_content = [] else: current_content.append("\n" + line.strip()) if current_sender and current_content: results.append( HumanMessage( content="".join(current_content).strip(), additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) return chat_loaders.ChatSession(messages=results) def lazy_load(self) -> Iterator[chat_loaders.ChatSession]: """ Lazy load the messages from the chat file and yield them in the required format. Yields: A `ChatSession` object containing the loaded chat messages. """ yield self._load_single_chat_session_from_txt(self.path)2. Create loader​We will point to the file we just wrote to disk.loader = DiscordChatLoader( path="./discord_chats.txt",)3. Load Messages​Assuming the format is correct, the loader will convert the chats to langchain messages.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages. ->: current_timestamp = line.split(" — ")[:2] current_content = [ line[len(current_sender) + len(current_timestamp) + 4 :].strip() ] elif re.match(r"\[\d{1,2}:\d{2} (?:AM|PM)\]", line.strip()): results.append( HumanMessage( content="".join(current_content).strip(), additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) current_timestamp = line.strip()[1:-1] current_content = [] else: current_content.append("\n" + line.strip()) if current_sender and current_content: results.append( HumanMessage( content="".join(current_content).strip(), additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) return chat_loaders.ChatSession(messages=results) def lazy_load(self) -> Iterator[chat_loaders.ChatSession]: """ Lazy load the messages from the chat file and yield them in the required format. Yields: A `ChatSession` object containing the loaded chat messages. """ yield self._load_single_chat_session_from_txt(self.path)2. Create loader​We will point to the file we just wrote to disk.loader = DiscordChatLoader( path="./discord_chats.txt",)3. Load Messages​Assuming the format is correct, the loader will convert the chats to langchain messages.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the
3,927
Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "talkingtower" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="talkingtower"))messages [{'messages': [AIMessage(content='Love music! Do you like jazz?', additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': '08/15/2023 11:10 AM\n'}]}, example=False), HumanMessage(content='Yes! Jazz is fantastic. Ever heard this one?\nWebsite\nListen to classic jazz track...', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': '08/15/2023 9:27 PM\n'}]}, example=False), AIMessage(content='Indeed! Great choice. üé∑', additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': 'Yesterday at 5:03 AM\n'}]}, example=False), HumanMessage(content='Thanks! How about some virtual sightseeing?\nWebsite\nVirtual tour of famous landmarks...', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': 'Yesterday at 5:23 AM\n'}]}, example=False), AIMessage(content="Sounds fun! Let's explore.", additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': 'Today at 2:38 PM\n'}]}, example=False), HumanMessage(content='Enjoy the tour! See you around.', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': 'Today at 2:56 PM\n'}]}, example=False), AIMessage(content='Thank you! Goodbye! üëã', additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': 'Today at 3:00 PM\n'}]}, example=False), HumanMessage(content='Farewell! Happy exploring.', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': 'Today at 3:02 PM\n'}]}, example=False)]}]Next Steps‚ÄãYou can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message from langchain.chat_models import ChatOpenAIllm =
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages. ->: Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "talkingtower" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="talkingtower"))messages [{'messages': [AIMessage(content='Love music! Do you like jazz?', additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': '08/15/2023 11:10 AM\n'}]}, example=False), HumanMessage(content='Yes! Jazz is fantastic. Ever heard this one?\nWebsite\nListen to classic jazz track...', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': '08/15/2023 9:27 PM\n'}]}, example=False), AIMessage(content='Indeed! Great choice. üé∑', additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': 'Yesterday at 5:03 AM\n'}]}, example=False), HumanMessage(content='Thanks! How about some virtual sightseeing?\nWebsite\nVirtual tour of famous landmarks...', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': 'Yesterday at 5:23 AM\n'}]}, example=False), AIMessage(content="Sounds fun! Let's explore.", additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': 'Today at 2:38 PM\n'}]}, example=False), HumanMessage(content='Enjoy the tour! See you around.', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': 'Today at 2:56 PM\n'}]}, example=False), AIMessage(content='Thank you! Goodbye! üëã', additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': 'Today at 3:00 PM\n'}]}, example=False), HumanMessage(content='Farewell! Happy exploring.', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': 'Today at 3:02 PM\n'}]}, example=False)]}]Next Steps‚ÄãYou can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message from langchain.chat_models import ChatOpenAIllm =
3,928
from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True) Thank you! Have a wonderful day! 🌟PreviousChat loadersNextFacebook Messenger1. Create message dump2. Define chat loader2. Create loader3. Load MessagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.
This notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages. ->: from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True) Thank you! Have a wonderful day! 🌟PreviousChat loadersNextFacebook Messenger1. Create message dump2. Define chat loader2. Create loader3. Load MessagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,929
Fine-Tuning on LangSmith LLM Runs | 🦜️🔗 Langchain
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data.
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data. ->: Fine-Tuning on LangSmith LLM Runs | 🦜️🔗 Langchain
3,930
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersFine-Tuning on LangSmith LLM RunsOn this pageFine-Tuning on LangSmith LLM RunsThis notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data. The process is simple and comprises 3 steps.Select the LLM runs to train on.Use the LangSmithRunChatLoader to load runs as chat sessions.Fine-tune your model.Then you can use the fine-tuned model in your LangChain app.Before diving in, let's install our prerequisites.Prerequisites​Ensure you've installed langchain >= 0.0.311 and have configured your environment with your LangSmith API key.%pip install -U langchain openaiimport osimport uuiduid = uuid.uuid4().hex[:6]project_name = f"Run Fine-tuning Walkthrough {uid}"os.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = "YOUR API KEY"os.environ["LANGCHAIN_PROJECT"] = project_name1. Select Runs​The first step is selecting which runs to fine-tune on. A common case would be to select LLM runs within traces that have received positive user feedback. You can find examples of this in theLangSmith Cookbook and in the docs.For the sake of this tutorial, we will generate some runs for you to use here. Let's try fine-tuning a
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data.
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersFine-Tuning on LangSmith LLM RunsOn this pageFine-Tuning on LangSmith LLM RunsThis notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data. The process is simple and comprises 3 steps.Select the LLM runs to train on.Use the LangSmithRunChatLoader to load runs as chat sessions.Fine-tune your model.Then you can use the fine-tuned model in your LangChain app.Before diving in, let's install our prerequisites.Prerequisites​Ensure you've installed langchain >= 0.0.311 and have configured your environment with your LangSmith API key.%pip install -U langchain openaiimport osimport uuiduid = uuid.uuid4().hex[:6]project_name = f"Run Fine-tuning Walkthrough {uid}"os.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = "YOUR API KEY"os.environ["LANGCHAIN_PROJECT"] = project_name1. Select Runs​The first step is selecting which runs to fine-tune on. A common case would be to select LLM runs within traces that have received positive user feedback. You can find examples of this in theLangSmith Cookbook and in the docs.For the sake of this tutorial, we will generate some runs for you to use here. Let's try fine-tuning a
3,931
simple function-calling chain.from langchain.pydantic_v1 import BaseModel, Fieldfrom enum import Enumclass Operation(Enum): add = "+" subtract = "-" multiply = "*" divide = "/"class Calculator(BaseModel): """A calculator function""" num1: float num2: float operation: Operation = Field(..., description="+,-,*,/") def calculate(self): if self.operation == Operation.add: return self.num1 + self.num2 elif self.operation == Operation.subtract: return self.num1 - self.num2 elif self.operation == Operation.multiply: return self.num1 * self.num2 elif self.operation == Operation.divide: if self.num2 != 0: return self.num1 / self.num2 else: return "Cannot divide by zero"from langchain.utils.openai_functions import convert_pydantic_to_openai_functionfrom langchain.pydantic_v1 import BaseModelfrom pprint import pprintopenai_function_def = convert_pydantic_to_openai_function(Calculator)pprint(openai_function_def) {'description': 'A calculator function', 'name': 'Calculator', 'parameters': {'description': 'A calculator function', 'properties': {'num1': {'title': 'Num1', 'type': 'number'}, 'num2': {'title': 'Num2', 'type': 'number'}, 'operation': {'allOf': [{'description': 'An ' 'enumeration.', 'enum': ['+', '-', '*', '/'], 'title': 'Operation'}], 'description': '+,-,*,/'}}, 'required':
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data.
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data. ->: simple function-calling chain.from langchain.pydantic_v1 import BaseModel, Fieldfrom enum import Enumclass Operation(Enum): add = "+" subtract = "-" multiply = "*" divide = "/"class Calculator(BaseModel): """A calculator function""" num1: float num2: float operation: Operation = Field(..., description="+,-,*,/") def calculate(self): if self.operation == Operation.add: return self.num1 + self.num2 elif self.operation == Operation.subtract: return self.num1 - self.num2 elif self.operation == Operation.multiply: return self.num1 * self.num2 elif self.operation == Operation.divide: if self.num2 != 0: return self.num1 / self.num2 else: return "Cannot divide by zero"from langchain.utils.openai_functions import convert_pydantic_to_openai_functionfrom langchain.pydantic_v1 import BaseModelfrom pprint import pprintopenai_function_def = convert_pydantic_to_openai_function(Calculator)pprint(openai_function_def) {'description': 'A calculator function', 'name': 'Calculator', 'parameters': {'description': 'A calculator function', 'properties': {'num1': {'title': 'Num1', 'type': 'number'}, 'num2': {'title': 'Num2', 'type': 'number'}, 'operation': {'allOf': [{'description': 'An ' 'enumeration.', 'enum': ['+', '-', '*', '/'], 'title': 'Operation'}], 'description': '+,-,*,/'}}, 'required':
3,932
'+,-,*,/'}}, 'required': ['num1', 'num2', 'operation'], 'title': 'Calculator', 'type': 'object'}}from langchain.prompts import ChatPromptTemplatefrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers.openai_functions import PydanticOutputFunctionsParserprompt = ChatPromptTemplate.from_messages( [ ("system", "You are an accounting assistant."), ("user", "{input}"), ])chain = ( prompt | ChatOpenAI().bind(functions=[openai_function_def]) | PydanticOutputFunctionsParser(pydantic_schema=Calculator) | (lambda x: x.calculate()))math_questions = [ "What's 45/9?", "What's 81/9?", "What's 72/8?", "What's 56/7?", "What's 36/6?", "What's 64/8?", "What's 12*6?", "What's 8*8?", "What's 10*10?", "What's 11*11?", "What's 13*13?", "What's 45+30?", "What's 72+28?", "What's 56+44?", "What's 63+37?", "What's 70-35?", "What's 60-30?", "What's 50-25?", "What's 40-20?", "What's 30-15?"]results = chain.batch([{"input": q} for q in math_questions], return_exceptions=True) Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..Load runs that did not error‚ÄãNow we can select the successful runs to fine-tune on.from langsmith.client import Clientclient = Client()successful_traces = { run.trace_id for run in client.list_runs( project_name=project_name, execution_order=1, error=False, )} llm_runs = [ run for run in client.list_runs( project_name=project_name, run_type="llm", ) if run.trace_id in successful_traces]2. Prepare data‚ÄãNow we can create an instance of LangSmithRunChatLoader and load the chat sessions using its lazy_load() method.from langchain.chat_loaders.langsmith import LangSmithRunChatLoaderloader =
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data.
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data. ->: '+,-,*,/'}}, 'required': ['num1', 'num2', 'operation'], 'title': 'Calculator', 'type': 'object'}}from langchain.prompts import ChatPromptTemplatefrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers.openai_functions import PydanticOutputFunctionsParserprompt = ChatPromptTemplate.from_messages( [ ("system", "You are an accounting assistant."), ("user", "{input}"), ])chain = ( prompt | ChatOpenAI().bind(functions=[openai_function_def]) | PydanticOutputFunctionsParser(pydantic_schema=Calculator) | (lambda x: x.calculate()))math_questions = [ "What's 45/9?", "What's 81/9?", "What's 72/8?", "What's 56/7?", "What's 36/6?", "What's 64/8?", "What's 12*6?", "What's 8*8?", "What's 10*10?", "What's 11*11?", "What's 13*13?", "What's 45+30?", "What's 72+28?", "What's 56+44?", "What's 63+37?", "What's 70-35?", "What's 60-30?", "What's 50-25?", "What's 40-20?", "What's 30-15?"]results = chain.batch([{"input": q} for q in math_questions], return_exceptions=True) Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..Load runs that did not error‚ÄãNow we can select the successful runs to fine-tune on.from langsmith.client import Clientclient = Client()successful_traces = { run.trace_id for run in client.list_runs( project_name=project_name, execution_order=1, error=False, )} llm_runs = [ run for run in client.list_runs( project_name=project_name, run_type="llm", ) if run.trace_id in successful_traces]2. Prepare data‚ÄãNow we can create an instance of LangSmithRunChatLoader and load the chat sessions using its lazy_load() method.from langchain.chat_loaders.langsmith import LangSmithRunChatLoaderloader =
3,933
import LangSmithRunChatLoaderloader = LangSmithRunChatLoader(runs=llm_runs)chat_sessions = loader.lazy_load()With the chat sessions loaded, convert them into a format suitable for fine-tuning.​from langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(chat_sessions)3. Fine-tune the model​Now, initiate the fine-tuning process using the OpenAI library.import openaiimport timeimport jsonfrom io import BytesIOmy_file = BytesIO()for dialog in training_data: my_file.write((json.dumps({"messages": dialog}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)# Wait for the fine-tuning to complete (this may take some time)status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.FineTuningJob.retrieve(job.id).status# Now your model is fine-tuned! Status=[running]... 346.26s. 31.70s4. Use in LangChain​After fine-tuning, use the resulting model ID with the ChatOpenAI model class in your LangChain app.# Get the fine-tuned model IDjob = openai.FineTuningJob.retrieve(job.id)model_id = job.fine_tuned_model# Use the fine-tuned model in LangChainmodel = ChatOpenAI( model=model_id, temperature=1,)(prompt | model).invoke({"input": "What's 56/7?"}) AIMessage(content='{\n "num1": 56,\n "num2": 7,\n "operation": "/"\n}')Now you have successfully fine-tuned a model using data from LangSmith LLM runs!PreviousFine-Tuning on LangSmith Chat DatasetsNextSlackPrerequisites1. Select Runs2. Prepare data3. Fine-tune the model4. Use in LangChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data.
This notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data. ->: import LangSmithRunChatLoaderloader = LangSmithRunChatLoader(runs=llm_runs)chat_sessions = loader.lazy_load()With the chat sessions loaded, convert them into a format suitable for fine-tuning.​from langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(chat_sessions)3. Fine-tune the model​Now, initiate the fine-tuning process using the OpenAI library.import openaiimport timeimport jsonfrom io import BytesIOmy_file = BytesIO()for dialog in training_data: my_file.write((json.dumps({"messages": dialog}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)# Wait for the fine-tuning to complete (this may take some time)status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.FineTuningJob.retrieve(job.id).status# Now your model is fine-tuned! Status=[running]... 346.26s. 31.70s4. Use in LangChain​After fine-tuning, use the resulting model ID with the ChatOpenAI model class in your LangChain app.# Get the fine-tuned model IDjob = openai.FineTuningJob.retrieve(job.id)model_id = job.fine_tuned_model# Use the fine-tuned model in LangChainmodel = ChatOpenAI( model=model_id, temperature=1,)(prompt | model).invoke({"input": "What's 56/7?"}) AIMessage(content='{\n "num1": 56,\n "num2": 7,\n "operation": "/"\n}')Now you have successfully fine-tuned a model using data from LangSmith LLM runs!PreviousFine-Tuning on LangSmith Chat DatasetsNextSlackPrerequisites1. Select Runs2. Prepare data3. Fine-tune the model4. Use in LangChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,934
iMessage | 🦜️🔗 Langchain
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages. ->: iMessage | 🦜️🔗 Langchain
3,935
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersiMessageOn this pageiMessageThis notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.On MacOS, iMessage stores conversations in a sqlite database at ~/Library/Messages/chat.db (at least for macOS Ventura 13.4).
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersiMessageOn this pageiMessageThis notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.On MacOS, iMessage stores conversations in a sqlite database at ~/Library/Messages/chat.db (at least for macOS Ventura 13.4).
3,936
The IMessageChatLoader loads from this database file. Create the IMessageChatLoader with the file path pointed to chat.db database you'd like to process.Call loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.1. Access Chat DB‚ÄãIt's likely that your terminal is denied access to ~/Library/Messages. To use this class, you can copy the DB to an accessible directory (e.g., Documents) and load from there. Alternatively (and not recommended), you can grant full disk access for your terminal emulator in System Settings > Security and Privacy > Full Disk Access.We have created an example database you can use at this linked drive file.# This uses some example dataimport requestsdef download_drive_file(url: str, output_path: str = 'chat.db') -> None: file_id = url.split('/')[-2] download_url = f'https://drive.google.com/uc?export=download&id={file_id}' response = requests.get(download_url) if response.status_code != 200: print('Failed to download the file.') return with open(output_path, 'wb') as file: file.write(response.content) print(f'File {output_path} downloaded.')url = 'https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing'# Download file to chat.dbdownload_drive_file(url) File chat.db downloaded.2. Create the Chat Loader‚ÄãProvide the loader with the file path to the zip directory. You can optionally specify the user id that maps to an ai message as well an configure whether to merge message runs.from langchain.chat_loaders.imessage import IMessageChatLoaderloader = IMessageChatLoader( path="./chat.db",)3. Load messages‚ÄãThe load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation. All messages are mapped to "HumanMessage"
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages. ->: The IMessageChatLoader loads from this database file. Create the IMessageChatLoader with the file path pointed to chat.db database you'd like to process.Call loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.1. Access Chat DB‚ÄãIt's likely that your terminal is denied access to ~/Library/Messages. To use this class, you can copy the DB to an accessible directory (e.g., Documents) and load from there. Alternatively (and not recommended), you can grant full disk access for your terminal emulator in System Settings > Security and Privacy > Full Disk Access.We have created an example database you can use at this linked drive file.# This uses some example dataimport requestsdef download_drive_file(url: str, output_path: str = 'chat.db') -> None: file_id = url.split('/')[-2] download_url = f'https://drive.google.com/uc?export=download&id={file_id}' response = requests.get(download_url) if response.status_code != 200: print('Failed to download the file.') return with open(output_path, 'wb') as file: file.write(response.content) print(f'File {output_path} downloaded.')url = 'https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing'# Download file to chat.dbdownload_drive_file(url) File chat.db downloaded.2. Create the Chat Loader‚ÄãProvide the loader with the file path to the zip directory. You can optionally specify the user id that maps to an ai message as well an configure whether to merge message runs.from langchain.chat_loaders.imessage import IMessageChatLoaderloader = IMessageChatLoader( path="./chat.db",)3. Load messages‚ÄãThe load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation. All messages are mapped to "HumanMessage"
3,937
All messages are mapped to "HumanMessage" objects to start. You can optionally choose to merge message "runs" (consecutive messages from the same sender) and select a sender to represent the "AI". The fine-tuned LLM will learn to generate these AI messages.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "Tortoise" to AI messages. Do you have a guess who these conversations are between?chat_sessions: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Tortoise"))# Now all of the Tortoise's messages will take the AI message class# which maps to the 'assistant' role in OpenAI's training formatalternating_sessions[0]['messages'][:3] [AIMessage(content="Slow and steady, that's my motto.", additional_kwargs={'message_time': 1693182723, 'sender': 'Tortoise'}, example=False), HumanMessage(content='Speed is key!', additional_kwargs={'message_time': 1693182753, 'sender': 'Hare'}, example=False), AIMessage(content='A balanced approach is more reliable.', additional_kwargs={'message_time': 1693182783, 'sender': 'Tortoise'}, example=False)]3. Prepare for fine-tuning‚ÄãNow it's time to convert our chat messages to OpenAI dictionaries. We can use the convert_messages_for_finetuning utility to do so.from langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(alternating_sessions)print(f"Prepared {len(training_data)} dialogues for training") Prepared 10 dialogues for training4. Fine-tune the model‚ÄãIt's time to fine-tune the model. Make sure you have openai installed
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages. ->: All messages are mapped to "HumanMessage" objects to start. You can optionally choose to merge message "runs" (consecutive messages from the same sender) and select a sender to represent the "AI". The fine-tuned LLM will learn to generate these AI messages.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "Tortoise" to AI messages. Do you have a guess who these conversations are between?chat_sessions: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Tortoise"))# Now all of the Tortoise's messages will take the AI message class# which maps to the 'assistant' role in OpenAI's training formatalternating_sessions[0]['messages'][:3] [AIMessage(content="Slow and steady, that's my motto.", additional_kwargs={'message_time': 1693182723, 'sender': 'Tortoise'}, example=False), HumanMessage(content='Speed is key!', additional_kwargs={'message_time': 1693182753, 'sender': 'Hare'}, example=False), AIMessage(content='A balanced approach is more reliable.', additional_kwargs={'message_time': 1693182783, 'sender': 'Tortoise'}, example=False)]3. Prepare for fine-tuning‚ÄãNow it's time to convert our chat messages to OpenAI dictionaries. We can use the convert_messages_for_finetuning utility to do so.from langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(alternating_sessions)print(f"Prepared {len(training_data)} dialogues for training") Prepared 10 dialogues for training4. Fine-tune the model‚ÄãIt's time to fine-tune the model. Make sure you have openai installed
3,938
and have set your OPENAI_API_KEY appropriately# %pip install -U openai --quietimport jsonfrom io import BytesIOimport timeimport openai# We will write the jsonl file in memorymy_file = BytesIO()for m in training_data: my_file.write((json.dumps({"messages": m}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')# OpenAI audits each training file for compliance reasons.# This make take a few minutesstatus = openai.File.retrieve(training_file.id).statusstart_time = time.time()while status != "processed": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.File.retrieve(training_file.id).statusprint(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.") File file-zHIgf4r8LltZG3RFpkGd4Sjf ready after 10.19 seconds.With the file ready, it's time to kick off a training job.job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)Grab a cup of tea while your model is being prepared. This may take some time!status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) job = openai.FineTuningJob.retrieve(job.id) status = job.status Status=[running]... 524.95sprint(job.fine_tuned_model) ft:gpt-3.5-turbo-0613:personal::7sKoRdlz5. Use in LangChain‚ÄãYou can use the resulting model ID directly the ChatOpenAI model class.from langchain.chat_models import ChatOpenAImodel = ChatOpenAI( model=job.fine_tuned_model, temperature=1,)from langchain.prompts import ChatPromptTemplatefrom langchain.schema.output_parser import StrOutputParserprompt = ChatPromptTemplate.from_messages( [ ("system", "You are speaking to hare."), ("human", "{input}"), ])chain = prompt | model | StrOutputParser()for tok in
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages. ->: and have set your OPENAI_API_KEY appropriately# %pip install -U openai --quietimport jsonfrom io import BytesIOimport timeimport openai# We will write the jsonl file in memorymy_file = BytesIO()for m in training_data: my_file.write((json.dumps({"messages": m}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')# OpenAI audits each training file for compliance reasons.# This make take a few minutesstatus = openai.File.retrieve(training_file.id).statusstart_time = time.time()while status != "processed": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.File.retrieve(training_file.id).statusprint(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.") File file-zHIgf4r8LltZG3RFpkGd4Sjf ready after 10.19 seconds.With the file ready, it's time to kick off a training job.job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)Grab a cup of tea while your model is being prepared. This may take some time!status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) job = openai.FineTuningJob.retrieve(job.id) status = job.status Status=[running]... 524.95sprint(job.fine_tuned_model) ft:gpt-3.5-turbo-0613:personal::7sKoRdlz5. Use in LangChain‚ÄãYou can use the resulting model ID directly the ChatOpenAI model class.from langchain.chat_models import ChatOpenAImodel = ChatOpenAI( model=job.fine_tuned_model, temperature=1,)from langchain.prompts import ChatPromptTemplatefrom langchain.schema.output_parser import StrOutputParserprompt = ChatPromptTemplate.from_messages( [ ("system", "You are speaking to hare."), ("human", "{input}"), ])chain = prompt | model | StrOutputParser()for tok in
3,939
= prompt | model | StrOutputParser()for tok in chain.stream({"input": "What's the golden thread?"}): print(tok, end="", flush=True) A symbol of interconnectedness.PreviousGMailNextFine-Tuning on LangSmith Chat Datasets1. Access Chat DB2. Create the Chat Loader3. Load messages3. Prepare for fine-tuning4. Fine-tune the model5. Use in LangChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.
This notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages. ->: = prompt | model | StrOutputParser()for tok in chain.stream({"input": "What's the golden thread?"}): print(tok, end="", flush=True) A symbol of interconnectedness.PreviousGMailNextFine-Tuning on LangSmith Chat Datasets1. Access Chat DB2. Create the Chat Loader3. Load messages3. Prepare for fine-tuning4. Fine-tune the model5. Use in LangChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,940
WhatsApp | 🦜️🔗 Langchain
This notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages.
This notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages. ->: WhatsApp | 🦜️🔗 Langchain
3,941
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersWhatsAppOn this pageWhatsAppThis notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages.The process has three steps:Export the chat conversations to computerCreate the WhatsAppChatLoader with the file path pointed to the json file or directory of JSON filesCall loader.load() (or loader.lazy_load()) to perform the conversion.1. Create message dump​To make the export of your WhatsApp conversation(s), complete the following steps:Open the target conversationClick the three dots in the top right corner and select "More".Then select "Export chat" and choose "Without media".An example of the data format for each conversation is below: whatsapp_chat.txt[8/15/23, 9:12:33 AM] Dr. Feather: ‎Messages and calls are end-to-end encrypted. No one outside of this chat, not even WhatsApp, can read or listen to them.[8/15/23, 9:12:43 AM] Dr. Feather: I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!‎[8/15/23, 9:12:48 AM] Dr. Feather: ‎image omitted[8/15/23, 9:13:15 AM] Jungle Jane: That's stunning! Were you able to observe its behavior?‎[8/15/23, 9:13:23 AM] Dr. Feather: ‎image omitted[8/15/23, 9:14:02 AM] Dr. Feather: Yes, it seemed quite social with other macaws. They're known for their playful nature.[8/15/23, 9:14:15 AM] Jungle Jane: How's the research going on parrot communication?‎[8/15/23, 9:14:30 AM] Dr. Feather: ‎image omitted[8/15/23, 9:14:50
This notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages.
This notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersWhatsAppOn this pageWhatsAppThis notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages.The process has three steps:Export the chat conversations to computerCreate the WhatsAppChatLoader with the file path pointed to the json file or directory of JSON filesCall loader.load() (or loader.lazy_load()) to perform the conversion.1. Create message dump​To make the export of your WhatsApp conversation(s), complete the following steps:Open the target conversationClick the three dots in the top right corner and select "More".Then select "Export chat" and choose "Without media".An example of the data format for each conversation is below: whatsapp_chat.txt[8/15/23, 9:12:33 AM] Dr. Feather: ‎Messages and calls are end-to-end encrypted. No one outside of this chat, not even WhatsApp, can read or listen to them.[8/15/23, 9:12:43 AM] Dr. Feather: I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!‎[8/15/23, 9:12:48 AM] Dr. Feather: ‎image omitted[8/15/23, 9:13:15 AM] Jungle Jane: That's stunning! Were you able to observe its behavior?‎[8/15/23, 9:13:23 AM] Dr. Feather: ‎image omitted[8/15/23, 9:14:02 AM] Dr. Feather: Yes, it seemed quite social with other macaws. They're known for their playful nature.[8/15/23, 9:14:15 AM] Jungle Jane: How's the research going on parrot communication?‎[8/15/23, 9:14:30 AM] Dr. Feather: ‎image omitted[8/15/23, 9:14:50
3,942
Dr. Feather: ‚Äéimage omitted[8/15/23, 9:14:50 AM] Dr. Feather: It's progressing well. We're learning so much about how they use sound and color to communicate.[8/15/23, 9:15:10 AM] Jungle Jane: That's fascinating! Can't wait to read your paper on it.[8/15/23, 9:15:20 AM] Dr. Feather: Thank you! I'll send you a draft soon.[8/15/23, 9:25:16 PM] Jungle Jane: Looking forward to it! Keep up the great work.2. Create the Chat Loader‚ÄãThe WhatsAppChatLoader accepts the resulting zip file, unzipped directory, or the path to any of the chat .txt files therein.Provide that as well as the user name you want to take on the role of "AI" when fine-tuning.from langchain.chat_loaders.whatsapp import WhatsAppChatLoaderloader = WhatsAppChatLoader( path="./whatsapp_chat.txt", )3. Load messages‚ÄãThe load() (or lazy_load) methods return a list of "ChatSessions" that currently store the list of messages per loaded conversation.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "Dr. Feather" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Dr. Feather")) [{'messages': [AIMessage(content='I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!', additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:12:43 AM'}]}, example=False), HumanMessage(content="That's stunning! Were you able to observe its behavior?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:13:15 AM'}]}, example=False), AIMessage(content="Yes, it seemed quite social with other macaws. They're known for their playful nature.", additional_kwargs={'sender': 'Dr. Feather', 'events':
This notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages.
This notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages. ->: Dr. Feather: ‚Äéimage omitted[8/15/23, 9:14:50 AM] Dr. Feather: It's progressing well. We're learning so much about how they use sound and color to communicate.[8/15/23, 9:15:10 AM] Jungle Jane: That's fascinating! Can't wait to read your paper on it.[8/15/23, 9:15:20 AM] Dr. Feather: Thank you! I'll send you a draft soon.[8/15/23, 9:25:16 PM] Jungle Jane: Looking forward to it! Keep up the great work.2. Create the Chat Loader‚ÄãThe WhatsAppChatLoader accepts the resulting zip file, unzipped directory, or the path to any of the chat .txt files therein.Provide that as well as the user name you want to take on the role of "AI" when fine-tuning.from langchain.chat_loaders.whatsapp import WhatsAppChatLoaderloader = WhatsAppChatLoader( path="./whatsapp_chat.txt", )3. Load messages‚ÄãThe load() (or lazy_load) methods return a list of "ChatSessions" that currently store the list of messages per loaded conversation.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "Dr. Feather" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Dr. Feather")) [{'messages': [AIMessage(content='I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!', additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:12:43 AM'}]}, example=False), HumanMessage(content="That's stunning! Were you able to observe its behavior?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:13:15 AM'}]}, example=False), AIMessage(content="Yes, it seemed quite social with other macaws. They're known for their playful nature.", additional_kwargs={'sender': 'Dr. Feather', 'events':
3,943
'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:14:02 AM'}]}, example=False), HumanMessage(content="How's the research going on parrot communication?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:14:15 AM'}]}, example=False), AIMessage(content="It's progressing well. We're learning so much about how they use sound and color to communicate.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:14:50 AM'}]}, example=False), HumanMessage(content="That's fascinating! Can't wait to read your paper on it.", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:15:10 AM'}]}, example=False), AIMessage(content="Thank you! I'll send you a draft soon.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:15:20 AM'}]}, example=False), HumanMessage(content='Looking forward to it! Keep up the great work.', additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:25:16 PM'}]}, example=False)]}]Next Steps​You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message.from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True) Thank you for the encouragement! I'll do my best to continue studying and sharing fascinating insights about parrot communication.PreviousWeChat1. Create message dump2. Create the Chat Loader3. Load messagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages.
This notebook shows how to use the WhatsApp chat loader. This class helps map exported Telegram conversations to LangChain chat messages. ->: 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:14:02 AM'}]}, example=False), HumanMessage(content="How's the research going on parrot communication?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:14:15 AM'}]}, example=False), AIMessage(content="It's progressing well. We're learning so much about how they use sound and color to communicate.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:14:50 AM'}]}, example=False), HumanMessage(content="That's fascinating! Can't wait to read your paper on it.", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:15:10 AM'}]}, example=False), AIMessage(content="Thank you! I'll send you a draft soon.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:15:20 AM'}]}, example=False), HumanMessage(content='Looking forward to it! Keep up the great work.', additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:25:16 PM'}]}, example=False)]}]Next Steps​You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message.from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True) Thank you for the encouragement! I'll do my best to continue studying and sharing fascinating insights about parrot communication.PreviousWeChat1. Create message dump2. Create the Chat Loader3. Load messagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,944
Facebook Messenger | 🦜️🔗 Langchain
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are: ->: Facebook Messenger | 🦜️🔗 Langchain
3,945
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersFacebook MessengerOn this pageFacebook MessengerThis notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:Download your messenger data to disk.Create the Chat Loader and call loader.load() (or loader.lazy_load()) to perform the conversion.Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class. Once you've done this, call convert_messages_for_finetuning to prepare your data for fine-tuning.Once this has been done, you can fine-tune your model. To do so you would complete the following steps:Upload your messages to OpenAI and run a fine-tuning job.Use the resulting model in your LangChain app!Let's begin.1. Download Data​To download your own messenger data, following instructions here. IMPORTANT - make sure to download them in JSON format (not HTML).We are hosting an example dump at this google drive link that we will use in this walkthrough.# This uses some example dataimport requestsimport zipfiledef download_and_unzip(url: str, output_path: str = 'file.zip') -> None: file_id = url.split('/')[-2] download_url = f'https://drive.google.com/uc?export=download&id={file_id}' response = requests.get(download_url) if response.status_code != 200: print('Failed to download the file.') return with open(output_path, 'wb') as file: file.write(response.content) print(f'File
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are: ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersFacebook MessengerOn this pageFacebook MessengerThis notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:Download your messenger data to disk.Create the Chat Loader and call loader.load() (or loader.lazy_load()) to perform the conversion.Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class. Once you've done this, call convert_messages_for_finetuning to prepare your data for fine-tuning.Once this has been done, you can fine-tune your model. To do so you would complete the following steps:Upload your messages to OpenAI and run a fine-tuning job.Use the resulting model in your LangChain app!Let's begin.1. Download Data​To download your own messenger data, following instructions here. IMPORTANT - make sure to download them in JSON format (not HTML).We are hosting an example dump at this google drive link that we will use in this walkthrough.# This uses some example dataimport requestsimport zipfiledef download_and_unzip(url: str, output_path: str = 'file.zip') -> None: file_id = url.split('/')[-2] download_url = f'https://drive.google.com/uc?export=download&id={file_id}' response = requests.get(download_url) if response.status_code != 200: print('Failed to download the file.') return with open(output_path, 'wb') as file: file.write(response.content) print(f'File
3,946
file.write(response.content) print(f'File {output_path} downloaded.') with zipfile.ZipFile(output_path, 'r') as zip_ref: zip_ref.extractall() print(f'File {output_path} has been unzipped.')# URL of the file to downloadurl = 'https://drive.google.com/file/d/1rh1s1o2i7B-Sk1v9o8KNgivLVGwJ-osV/view?usp=sharing'# Download and unzipdownload_and_unzip(url) File file.zip downloaded. File file.zip has been unzipped.2. Create Chat Loader‚ÄãWe have 2 different FacebookMessengerChatLoader classes, one for an entire directory of chats, and one to load individual files. Wedirectory_path = "./hogwarts"from langchain.chat_loaders.facebook_messenger import ( SingleFileFacebookMessengerChatLoader, FolderFacebookMessengerChatLoader,)loader = SingleFileFacebookMessengerChatLoader( path="./hogwarts/inbox/HermioneGranger/messages_Hermione_Granger.json",)chat_session = loader.load()[0]chat_session["messages"][:3] [HumanMessage(content="Hi Hermione! How's your summer going so far?", additional_kwargs={'sender': 'Harry Potter'}, example=False), HumanMessage(content="Harry! Lovely to hear from you. My summer is going well, though I do miss everyone. I'm spending most of my time going through my books and researching fascinating new topics. How about you?", additional_kwargs={'sender': 'Hermione Granger'}, example=False), HumanMessage(content="I miss you all too. The Dursleys are being their usual unpleasant selves but I'm getting by. At least I can practice some spells in my room without them knowing. Let me know if you find anything good in your researching!", additional_kwargs={'sender': 'Harry Potter'}, example=False)]loader = FolderFacebookMessengerChatLoader( path="./hogwarts",)chat_sessions = loader.load()len(chat_sessions) 93. Prepare for fine-tuning‚ÄãCalling load() returns all the chat messages we could extract as human messages. When conversing with chat bots, conversations typically follow a more strict alternating dialogue
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are: ->: file.write(response.content) print(f'File {output_path} downloaded.') with zipfile.ZipFile(output_path, 'r') as zip_ref: zip_ref.extractall() print(f'File {output_path} has been unzipped.')# URL of the file to downloadurl = 'https://drive.google.com/file/d/1rh1s1o2i7B-Sk1v9o8KNgivLVGwJ-osV/view?usp=sharing'# Download and unzipdownload_and_unzip(url) File file.zip downloaded. File file.zip has been unzipped.2. Create Chat Loader‚ÄãWe have 2 different FacebookMessengerChatLoader classes, one for an entire directory of chats, and one to load individual files. Wedirectory_path = "./hogwarts"from langchain.chat_loaders.facebook_messenger import ( SingleFileFacebookMessengerChatLoader, FolderFacebookMessengerChatLoader,)loader = SingleFileFacebookMessengerChatLoader( path="./hogwarts/inbox/HermioneGranger/messages_Hermione_Granger.json",)chat_session = loader.load()[0]chat_session["messages"][:3] [HumanMessage(content="Hi Hermione! How's your summer going so far?", additional_kwargs={'sender': 'Harry Potter'}, example=False), HumanMessage(content="Harry! Lovely to hear from you. My summer is going well, though I do miss everyone. I'm spending most of my time going through my books and researching fascinating new topics. How about you?", additional_kwargs={'sender': 'Hermione Granger'}, example=False), HumanMessage(content="I miss you all too. The Dursleys are being their usual unpleasant selves but I'm getting by. At least I can practice some spells in my room without them knowing. Let me know if you find anything good in your researching!", additional_kwargs={'sender': 'Harry Potter'}, example=False)]loader = FolderFacebookMessengerChatLoader( path="./hogwarts",)chat_sessions = loader.load()len(chat_sessions) 93. Prepare for fine-tuning‚ÄãCalling load() returns all the chat messages we could extract as human messages. When conversing with chat bots, conversations typically follow a more strict alternating dialogue
3,947
follow a more strict alternating dialogue pattern relative to real conversations. You can choose to merge message "runs" (consecutive messages from the same sender) and select a sender to represent the "AI". The fine-tuned LLM will learn to generate these AI messages.from langchain.chat_loaders.utils import ( merge_chat_runs, map_ai_messages,)merged_sessions = merge_chat_runs(chat_sessions)alternating_sessions = list(map_ai_messages(merged_sessions, "Harry Potter"))# Now all of Harry Potter's messages will take the AI message class# which maps to the 'assistant' role in OpenAI's training formatalternating_sessions[0]['messages'][:3] [AIMessage(content="Professor Snape, I was hoping I could speak with you for a moment about something that's been concerning me lately.", additional_kwargs={'sender': 'Harry Potter'}, example=False), HumanMessage(content="What is it, Potter? I'm quite busy at the moment.", additional_kwargs={'sender': 'Severus Snape'}, example=False), AIMessage(content="I apologize for the interruption, sir. I'll be brief. I've noticed some strange activity around the school grounds at night. I saw a cloaked figure lurking near the Forbidden Forest last night. I'm worried someone may be plotting something sinister.", additional_kwargs={'sender': 'Harry Potter'}, example=False)]Now we can convert to OpenAI format dictionaries‚Äãfrom langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(alternating_sessions)print(f"Prepared {len(training_data)} dialogues for training") Prepared 9 dialogues for trainingtraining_data[0][:3] [{'role': 'assistant', 'content': "Professor Snape, I was hoping I could speak with you for a moment about something that's been concerning me lately."}, {'role': 'user', 'content': "What is it, Potter? I'm quite busy at the moment."}, {'role': 'assistant', 'content': "I apologize for the interruption, sir. I'll be brief. I've noticed
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are: ->: follow a more strict alternating dialogue pattern relative to real conversations. You can choose to merge message "runs" (consecutive messages from the same sender) and select a sender to represent the "AI". The fine-tuned LLM will learn to generate these AI messages.from langchain.chat_loaders.utils import ( merge_chat_runs, map_ai_messages,)merged_sessions = merge_chat_runs(chat_sessions)alternating_sessions = list(map_ai_messages(merged_sessions, "Harry Potter"))# Now all of Harry Potter's messages will take the AI message class# which maps to the 'assistant' role in OpenAI's training formatalternating_sessions[0]['messages'][:3] [AIMessage(content="Professor Snape, I was hoping I could speak with you for a moment about something that's been concerning me lately.", additional_kwargs={'sender': 'Harry Potter'}, example=False), HumanMessage(content="What is it, Potter? I'm quite busy at the moment.", additional_kwargs={'sender': 'Severus Snape'}, example=False), AIMessage(content="I apologize for the interruption, sir. I'll be brief. I've noticed some strange activity around the school grounds at night. I saw a cloaked figure lurking near the Forbidden Forest last night. I'm worried someone may be plotting something sinister.", additional_kwargs={'sender': 'Harry Potter'}, example=False)]Now we can convert to OpenAI format dictionaries‚Äãfrom langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(alternating_sessions)print(f"Prepared {len(training_data)} dialogues for training") Prepared 9 dialogues for trainingtraining_data[0][:3] [{'role': 'assistant', 'content': "Professor Snape, I was hoping I could speak with you for a moment about something that's been concerning me lately."}, {'role': 'user', 'content': "What is it, Potter? I'm quite busy at the moment."}, {'role': 'assistant', 'content': "I apologize for the interruption, sir. I'll be brief. I've noticed
3,948
interruption, sir. I'll be brief. I've noticed some strange activity around the school grounds at night. I saw a cloaked figure lurking near the Forbidden Forest last night. I'm worried someone may be plotting something sinister."}]OpenAI currently requires at least 10 training examples for a fine-tuning job, though they recommend between 50-100 for most tasks. Since we only have 9 chat sessions, we can subdivide them (optionally with some overlap) so that each training example is comprised of a portion of a whole conversation.Facebook chat sessions (1 per person) often span multiple days and conversations,
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are: ->: interruption, sir. I'll be brief. I've noticed some strange activity around the school grounds at night. I saw a cloaked figure lurking near the Forbidden Forest last night. I'm worried someone may be plotting something sinister."}]OpenAI currently requires at least 10 training examples for a fine-tuning job, though they recommend between 50-100 for most tasks. Since we only have 9 chat sessions, we can subdivide them (optionally with some overlap) so that each training example is comprised of a portion of a whole conversation.Facebook chat sessions (1 per person) often span multiple days and conversations,
3,949
so the long-range dependencies may not be that important to model anyhow.# Our chat is alternating, we will make each datapoint a group of 8 messages,# with 2 messages overlappingchunk_size = 8overlap = 2training_examples = [ conversation_messages[i: i + chunk_size] for conversation_messages in training_data for i in range( 0, len(conversation_messages) - chunk_size + 1, chunk_size - overlap)]len(training_examples) 1004. Fine-tune the model‚ÄãIt's time to fine-tune the model. Make sure you have openai installed
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are: ->: so the long-range dependencies may not be that important to model anyhow.# Our chat is alternating, we will make each datapoint a group of 8 messages,# with 2 messages overlappingchunk_size = 8overlap = 2training_examples = [ conversation_messages[i: i + chunk_size] for conversation_messages in training_data for i in range( 0, len(conversation_messages) - chunk_size + 1, chunk_size - overlap)]len(training_examples) 1004. Fine-tune the model‚ÄãIt's time to fine-tune the model. Make sure you have openai installed
3,950
and have set your OPENAI_API_KEY appropriately# %pip install -U openai --quietimport jsonfrom io import BytesIOimport timeimport openai# We will write the jsonl file in memorymy_file = BytesIO()for m in training_examples: my_file.write((json.dumps({"messages": m}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')# OpenAI audits each training file for compliance reasons.# This make take a few minutesstatus = openai.File.retrieve(training_file.id).statusstart_time = time.time()while status != "processed": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.File.retrieve(training_file.id).statusprint(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.") File file-zCyNBeg4snpbBL7VkvsuhCz8 ready afer 30.55 seconds.With the file ready, it's time to kick off a training job.job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)Grab a cup of tea while your model is being prepared. This may take some time!status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) job = openai.FineTuningJob.retrieve(job.id) status = job.status Status=[running]... 908.87sprint(job.fine_tuned_model) ft:gpt-3.5-turbo-0613:personal::7rDwkaOq5. Use in LangChain‚ÄãYou can use the resulting model ID directly the ChatOpenAI model class.from langchain.chat_models import ChatOpenAImodel = ChatOpenAI( model=job.fine_tuned_model, temperature=1,)from langchain.prompts import ChatPromptTemplatefrom langchain.schema.output_parser import StrOutputParserprompt = ChatPromptTemplate.from_messages( [ ("human", "{input}"), ])chain = prompt | model | StrOutputParser()for tok in chain.stream({"input": "What classes are you
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are: ->: and have set your OPENAI_API_KEY appropriately# %pip install -U openai --quietimport jsonfrom io import BytesIOimport timeimport openai# We will write the jsonl file in memorymy_file = BytesIO()for m in training_examples: my_file.write((json.dumps({"messages": m}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')# OpenAI audits each training file for compliance reasons.# This make take a few minutesstatus = openai.File.retrieve(training_file.id).statusstart_time = time.time()while status != "processed": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.File.retrieve(training_file.id).statusprint(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.") File file-zCyNBeg4snpbBL7VkvsuhCz8 ready afer 30.55 seconds.With the file ready, it's time to kick off a training job.job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)Grab a cup of tea while your model is being prepared. This may take some time!status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) job = openai.FineTuningJob.retrieve(job.id) status = job.status Status=[running]... 908.87sprint(job.fine_tuned_model) ft:gpt-3.5-turbo-0613:personal::7rDwkaOq5. Use in LangChain‚ÄãYou can use the resulting model ID directly the ChatOpenAI model class.from langchain.chat_models import ChatOpenAImodel = ChatOpenAI( model=job.fine_tuned_model, temperature=1,)from langchain.prompts import ChatPromptTemplatefrom langchain.schema.output_parser import StrOutputParserprompt = ChatPromptTemplate.from_messages( [ ("human", "{input}"), ])chain = prompt | model | StrOutputParser()for tok in chain.stream({"input": "What classes are you
3,951
in chain.stream({"input": "What classes are you taking?"}): print(tok, end="", flush=True) The usual - Potions, Transfiguration, Defense Against the Dark Arts. What about you?PreviousDiscordNextGMail1. Download Data2. Create Chat Loader3. Prepare for fine-tuning4. Fine-tune the model5. Use in LangChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:
This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are: ->: in chain.stream({"input": "What classes are you taking?"}): print(tok, end="", flush=True) The usual - Potions, Transfiguration, Defense Against the Dark Arts. What about you?PreviousDiscordNextGMail1. Download Data2. Create Chat Loader3. Prepare for fine-tuning4. Fine-tune the model5. Use in LangChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,952
GMail | 🦜️🔗 Langchain
This loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email.
This loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email. ->: GMail | 🦜️🔗 Langchain
3,953
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersGMailGMailThis loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email.Note that there are clear limitations here. For example, all examples created are only looking at the previous email for context.To use:Set up a Google Developer Account: Go to the Google Developer Console, create a project, and enable the Gmail API for that project. This will give you a credentials.json file that you'll need later.Install the Google Client Library: Run the following command to install the Google Client Library:pip install --upgrade google-auth google-auth-oauthlib google-auth-httplib2 google-api-python-clientimport os.pathimport base64import jsonimport reimport timefrom google.auth.transport.requests import Requestfrom google.oauth2.credentials import Credentialsfrom google_auth_oauthlib.flow import InstalledAppFlowfrom googleapiclient.discovery import buildimport loggingimport requestsSCOPES = ['https://www.googleapis.com/auth/gmail.readonly']creds = None# The file token.json stores the user's access and refresh tokens, and is# created automatically when the authorization flow completes for the first# time.if
This loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email.
This loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersGMailGMailThis loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email.Note that there are clear limitations here. For example, all examples created are only looking at the previous email for context.To use:Set up a Google Developer Account: Go to the Google Developer Console, create a project, and enable the Gmail API for that project. This will give you a credentials.json file that you'll need later.Install the Google Client Library: Run the following command to install the Google Client Library:pip install --upgrade google-auth google-auth-oauthlib google-auth-httplib2 google-api-python-clientimport os.pathimport base64import jsonimport reimport timefrom google.auth.transport.requests import Requestfrom google.oauth2.credentials import Credentialsfrom google_auth_oauthlib.flow import InstalledAppFlowfrom googleapiclient.discovery import buildimport loggingimport requestsSCOPES = ['https://www.googleapis.com/auth/gmail.readonly']creds = None# The file token.json stores the user's access and refresh tokens, and is# created automatically when the authorization flow completes for the first# time.if
3,954
flow completes for the first# time.if os.path.exists('email_token.json'): creds = Credentials.from_authorized_user_file('email_token.json', SCOPES)# If there are no (valid) credentials available, let the user log in.if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( # your creds file here. Please create json file as here https://cloud.google.com/docs/authentication/getting-started 'creds.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('email_token.json', 'w') as token: token.write(creds.to_json())from langchain.chat_loaders.gmail import GMailLoaderloader = GMailLoader(creds=creds, n=3)data = loader.load()# Sometimes there can be errors which we silently ignorelen(data) 2from langchain.chat_loaders.utils import ( map_ai_messages,)# This makes messages sent by [email protected] the AI Messages# This means you will train an LLM to predict as if it's responding as hchasetraining_data = list(map_ai_messages(data, sender="Harrison Chase <[email protected]>"))PreviousFacebook MessengerNextiMessageCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright ¬© 2023 LangChain, Inc.
This loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email.
This loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email. ->: flow completes for the first# time.if os.path.exists('email_token.json'): creds = Credentials.from_authorized_user_file('email_token.json', SCOPES)# If there are no (valid) credentials available, let the user log in.if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( # your creds file here. Please create json file as here https://cloud.google.com/docs/authentication/getting-started 'creds.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('email_token.json', 'w') as token: token.write(creds.to_json())from langchain.chat_loaders.gmail import GMailLoaderloader = GMailLoader(creds=creds, n=3)data = loader.load()# Sometimes there can be errors which we silently ignorelen(data) 2from langchain.chat_loaders.utils import ( map_ai_messages,)# This makes messages sent by [email protected] the AI Messages# This means you will train an LLM to predict as if it's responding as hchasetraining_data = list(map_ai_messages(data, sender="Harrison Chase <[email protected]>"))PreviousFacebook MessengerNextiMessageCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright ¬© 2023 LangChain, Inc.
3,955
WeChat | 🦜�🔗 Langchain
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages.
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages. ->: WeChat | 🦜�🔗 Langchain
3,956
Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersWeChatOn this pageWeChatThere is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages.Highly inspired by https://python.langchain.com/docs/integrations/chat_loaders/discordThe process has five steps:Open your chat in the WeChat desktop app. Select messages you need by mouse-dragging or right-click. Due to restrictions, you can select up to 100 messages once a time. CMD/Ctrl + C to copy.Create the chat .txt file by pasting selected messages in a file on your local computer.Copy the chat loader definition from below to a local file.Initialize the WeChatChatLoader with the file path pointed to the text file.Call loader.load() (or loader.lazy_load()) to perform the conversion.1. Create message dump​This loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example.wechat_chats.txt女朋� 2023/09/16 2:51 PM天气有点凉男朋� 2023/09/16 2:51 PM�簟凉�著,瑶�寄�生。嵇�懒书札,底物慰秋情。女朋� 2023/09/16 3:06 PM忙什么呢男朋� 2023/09/16 3:06 PM今天�干�了一件�样的事那就是想你女朋� 2023/09/16 3:06 PM[动画表情]2. Define chat loader​LangChain currently does not support import
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages.
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages. ->: Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersWeChatOn this pageWeChatThere is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages.Highly inspired by https://python.langchain.com/docs/integrations/chat_loaders/discordThe process has five steps:Open your chat in the WeChat desktop app. Select messages you need by mouse-dragging or right-click. Due to restrictions, you can select up to 100 messages once a time. CMD/Ctrl + C to copy.Create the chat .txt file by pasting selected messages in a file on your local computer.Copy the chat loader definition from below to a local file.Initialize the WeChatChatLoader with the file path pointed to the text file.Call loader.load() (or loader.lazy_load()) to perform the conversion.1. Create message dump​This loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example.wechat_chats.txt女朋� 2023/09/16 2:51 PM天气有点凉男朋� 2023/09/16 2:51 PM�簟凉�著,瑶�寄�生。嵇�懒书札,底物慰秋情。女朋� 2023/09/16 3:06 PM忙什么呢男朋� 2023/09/16 3:06 PM今天�干�了一件�样的事那就是想你女朋� 2023/09/16 3:06 PM[动画表情]2. Define chat loader​LangChain currently does not support import
3,957
currently does not support import loggingimport refrom typing import Iterator, Listfrom langchain.schema import HumanMessage, BaseMessagefrom langchain.chat_loaders import base as chat_loaderslogger = logging.getLogger()class WeChatChatLoader(chat_loaders.BaseChatLoader): def __init__(self, path: str): """ Initialize the Discord chat loader. Args: path: Path to the exported Discord chat text file. """ self.path = path self._message_line_regex = re.compile( r"(?P<sender>.+?) (?P<timestamp>\d{4}/\d{2}/\d{2} \d{1,2}:\d{2} (?:AM|PM))", # noqa # flags=re.DOTALL, ) def _append_message_to_results( self, results: List, current_sender: str, current_timestamp: str, current_content: List[str], ): content = "\n".join(current_content).strip() # skip non-text messages like stickers, images, etc. if not re.match(r"\[.*\]", content): results.append( HumanMessage( content=content, additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) return results def _load_single_chat_session_from_txt( self, file_path: str ) -> chat_loaders.ChatSession: """ Load a single chat session from a text file. Args: file_path: Path to the text file containing the chat messages. Returns: A `ChatSession` object containing the loaded chat messages. """ with open(file_path, "r", encoding="utf-8") as file: lines = file.readlines() results: List[BaseMessage] = [] current_sender = None current_timestamp = None current_content = [] for line in lines: if re.match(self._message_line_regex, line): if current_sender
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages.
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages. ->: currently does not support import loggingimport refrom typing import Iterator, Listfrom langchain.schema import HumanMessage, BaseMessagefrom langchain.chat_loaders import base as chat_loaderslogger = logging.getLogger()class WeChatChatLoader(chat_loaders.BaseChatLoader): def __init__(self, path: str): """ Initialize the Discord chat loader. Args: path: Path to the exported Discord chat text file. """ self.path = path self._message_line_regex = re.compile( r"(?P<sender>.+?) (?P<timestamp>\d{4}/\d{2}/\d{2} \d{1,2}:\d{2} (?:AM|PM))", # noqa # flags=re.DOTALL, ) def _append_message_to_results( self, results: List, current_sender: str, current_timestamp: str, current_content: List[str], ): content = "\n".join(current_content).strip() # skip non-text messages like stickers, images, etc. if not re.match(r"\[.*\]", content): results.append( HumanMessage( content=content, additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) return results def _load_single_chat_session_from_txt( self, file_path: str ) -> chat_loaders.ChatSession: """ Load a single chat session from a text file. Args: file_path: Path to the text file containing the chat messages. Returns: A `ChatSession` object containing the loaded chat messages. """ with open(file_path, "r", encoding="utf-8") as file: lines = file.readlines() results: List[BaseMessage] = [] current_sender = None current_timestamp = None current_content = [] for line in lines: if re.match(self._message_line_regex, line): if current_sender
3,958
line): if current_sender and current_content: results = self._append_message_to_results( results, current_sender, current_timestamp, current_content) current_sender, current_timestamp = re.match(self._message_line_regex, line).groups() current_content = [] else: current_content.append(line.strip()) if current_sender and current_content: results = self._append_message_to_results( results, current_sender, current_timestamp, current_content) return chat_loaders.ChatSession(messages=results) def lazy_load(self) -> Iterator[chat_loaders.ChatSession]: """ Lazy load the messages from the chat file and yield them in the required format. Yields: A `ChatSession` object containing the loaded chat messages. """ yield self._load_single_chat_session_from_txt(self.path)2. Create loader​We will point to the file we just wrote to disk.loader = WeChatChatLoader( path="./wechat_chats.txt",)3. Load Messages​Assuming the format is correct, the loader will convert the chats to langchain messages.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "男朋�" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="男朋�"))messages [{'messages': [HumanMessage(content='天气有点凉', additional_kwargs={'sender': '女朋�', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False), AIMessage(content='�簟凉�著,瑶�寄�生。嵇�懒书札,底物慰秋情。', additional_kwargs={'sender': '男朋�', 'events': [{'message_time':
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages.
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages. ->: line): if current_sender and current_content: results = self._append_message_to_results( results, current_sender, current_timestamp, current_content) current_sender, current_timestamp = re.match(self._message_line_regex, line).groups() current_content = [] else: current_content.append(line.strip()) if current_sender and current_content: results = self._append_message_to_results( results, current_sender, current_timestamp, current_content) return chat_loaders.ChatSession(messages=results) def lazy_load(self) -> Iterator[chat_loaders.ChatSession]: """ Lazy load the messages from the chat file and yield them in the required format. Yields: A `ChatSession` object containing the loaded chat messages. """ yield self._load_single_chat_session_from_txt(self.path)2. Create loader​We will point to the file we just wrote to disk.loader = WeChatChatLoader( path="./wechat_chats.txt",)3. Load Messages​Assuming the format is correct, the loader will convert the chats to langchain messages.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "男朋�" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="男朋�"))messages [{'messages': [HumanMessage(content='天气有点凉', additional_kwargs={'sender': '女朋�', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False), AIMessage(content='�簟凉�著,瑶�寄�生。嵇�懒书札,底物慰秋情。', additional_kwargs={'sender': '男朋�', 'events': [{'message_time':
3,959
'男朋�', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False), HumanMessage(content='忙什么呢', additional_kwargs={'sender': '女朋�', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False), AIMessage(content='今天�干�了一件�样的事\n那就是想你', additional_kwargs={'sender': '男朋�', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False)]}]Next Steps​You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True)PreviousTwitter (via Apify)NextWhatsApp1. Create message dump2. Define chat loader2. Create loader3. Load MessagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages.
There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundreds of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages. ->: '男朋�', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False), HumanMessage(content='忙什么呢', additional_kwargs={'sender': '女朋�', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False), AIMessage(content='今天�干�了一件�样的事\n那就是想你', additional_kwargs={'sender': '男朋�', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False)]}]Next Steps​You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly make predictions for the next message from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True)PreviousTwitter (via Apify)NextWhatsApp1. Create message dump2. Define chat loader2. Create loader3. Load MessagesNext StepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,960
Fine-Tuning on LangSmith Chat Datasets | 🦜️🔗 Langchain
This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data.
This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data. ->: Fine-Tuning on LangSmith Chat Datasets | 🦜️🔗 Langchain
3,961
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersFine-Tuning on LangSmith Chat DatasetsOn this pageFine-Tuning on LangSmith Chat DatasetsThis notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data.
This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data.
This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersFine-Tuning on LangSmith Chat DatasetsOn this pageFine-Tuning on LangSmith Chat DatasetsThis notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data.
3,962
The process is simple and comprises 3 steps.Create the chat dataset.Use the LangSmithDatasetChatLoader to load examples.Fine-tune your model.Then you can use the fine-tuned model in your LangChain app.Before diving in, let's install our prerequisites.Prerequisites‚ÄãEnsure you've installed langchain >= 0.0.311 and have configured your environment with your LangSmith API key.%pip install -U langchain openaiimport osimport uuiduid = uuid.uuid4().hex[:6]os.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = "YOUR API KEY"1. Select dataset‚ÄãThis notebook fine-tunes a model directly on a selecting which runs to fine-tune on. You will often curate these from traced runs. You can learn more about LangSmith datasets in the docs docs.For the sake of this tutorial, we will upload an existing dataset here that you can use.from langsmith.client import Clientclient = Client()import requestsurl = "https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/docs/integrations/chat_loaders/example_data/langsmith_chat_dataset.json"response = requests.get(url)response.raise_for_status()data = response.json()dataset_name = f"Extraction Fine-tuning Dataset {uid}"ds = client.create_dataset(dataset_name=dataset_name, data_type="chat")_ = client.create_examples( inputs = [e['inputs'] for e in data], outputs = [e['outputs'] for e in data], dataset_id=ds.id,)2. Prepare Data‚ÄãNow we can create an instance of LangSmithRunChatLoader and load the chat sessions using its lazy_load() method.from langchain.chat_loaders.langsmith import LangSmithDatasetChatLoaderloader = LangSmithDatasetChatLoader(dataset_name=dataset_name)chat_sessions = loader.lazy_load()With the chat sessions loaded, convert them into a format suitable for fine-tuning.‚Äãfrom langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(chat_sessions)3. Fine-tune the Model‚ÄãNow, initiate the fine-tuning process using the OpenAI
This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data.
This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data. ->: The process is simple and comprises 3 steps.Create the chat dataset.Use the LangSmithDatasetChatLoader to load examples.Fine-tune your model.Then you can use the fine-tuned model in your LangChain app.Before diving in, let's install our prerequisites.Prerequisites‚ÄãEnsure you've installed langchain >= 0.0.311 and have configured your environment with your LangSmith API key.%pip install -U langchain openaiimport osimport uuiduid = uuid.uuid4().hex[:6]os.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = "YOUR API KEY"1. Select dataset‚ÄãThis notebook fine-tunes a model directly on a selecting which runs to fine-tune on. You will often curate these from traced runs. You can learn more about LangSmith datasets in the docs docs.For the sake of this tutorial, we will upload an existing dataset here that you can use.from langsmith.client import Clientclient = Client()import requestsurl = "https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/docs/integrations/chat_loaders/example_data/langsmith_chat_dataset.json"response = requests.get(url)response.raise_for_status()data = response.json()dataset_name = f"Extraction Fine-tuning Dataset {uid}"ds = client.create_dataset(dataset_name=dataset_name, data_type="chat")_ = client.create_examples( inputs = [e['inputs'] for e in data], outputs = [e['outputs'] for e in data], dataset_id=ds.id,)2. Prepare Data‚ÄãNow we can create an instance of LangSmithRunChatLoader and load the chat sessions using its lazy_load() method.from langchain.chat_loaders.langsmith import LangSmithDatasetChatLoaderloader = LangSmithDatasetChatLoader(dataset_name=dataset_name)chat_sessions = loader.lazy_load()With the chat sessions loaded, convert them into a format suitable for fine-tuning.‚Äãfrom langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(chat_sessions)3. Fine-tune the Model‚ÄãNow, initiate the fine-tuning process using the OpenAI
3,963
initiate the fine-tuning process using the OpenAI library.import openaiimport timeimport jsonfrom io import BytesIOmy_file = BytesIO()for dialog in training_data: my_file.write((json.dumps({"messages": dialog}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)# Wait for the fine-tuning to complete (this may take some time)status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.FineTuningJob.retrieve(job.id).status# Now your model is fine-tuned! Status=[running]... 302.42s. 143.85s4. Use in LangChain​After fine-tuning, use the resulting model ID with the ChatOpenAI model class in your LangChain app.# Get the fine-tuned model IDjob = openai.FineTuningJob.retrieve(job.id)model_id = job.fine_tuned_model# Use the fine-tuned model in LangChainmodel = ChatOpenAI( model=model_id, temperature=1,)model.invoke("There were three ravens sat on a tree.")Now you have successfully fine-tuned a model using data from LangSmith LLM runs!PreviousiMessageNextFine-Tuning on LangSmith LLM RunsPrerequisites1. Select dataset2. Prepare Data3. Fine-tune the Model4. Use in LangChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data.
This notebook demonstrates an easy way to load a LangSmith chat dataset fine-tune a model on that data. ->: initiate the fine-tuning process using the OpenAI library.import openaiimport timeimport jsonfrom io import BytesIOmy_file = BytesIO()for dialog in training_data: my_file.write((json.dumps({"messages": dialog}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)# Wait for the fine-tuning to complete (this may take some time)status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.FineTuningJob.retrieve(job.id).status# Now your model is fine-tuned! Status=[running]... 302.42s. 143.85s4. Use in LangChain​After fine-tuning, use the resulting model ID with the ChatOpenAI model class in your LangChain app.# Get the fine-tuned model IDjob = openai.FineTuningJob.retrieve(job.id)model_id = job.fine_tuned_model# Use the fine-tuned model in LangChainmodel = ChatOpenAI( model=model_id, temperature=1,)model.invoke("There were three ravens sat on a tree.")Now you have successfully fine-tuned a model using data from LangSmith LLM runs!PreviousiMessageNextFine-Tuning on LangSmith LLM RunsPrerequisites1. Select dataset2. Prepare Data3. Fine-tune the Model4. Use in LangChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,964
Twitter (via Apify) | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersTwitter (via Apify)Twitter (via Apify)This notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify. First, use Apify to export tweets. An exampleimport jsonfrom langchain.schema import AIMessagefrom langchain.adapters.openai import convert_message_to_dictwith open('example_data/dataset_twitter-scraper_2023-08-23_22-13-19-740.json') as f: data = json.load(f)# Filter out tweets that reference other tweets, because it's a bit weirdtweets = [d["full_text"] for d in data if "t.co" not in d['full_text']]# Create them as AI messagesmessages = [AIMessage(content=t) for t in tweets]# Add in a system message at the start# TODO: we could try to extract the subject from the tweets, and put that in the system message.system_message = {"role": "system", "content": "write a tweet"}data = [[system_message, convert_message_to_dict(m)] for m in messages]PrevioustelegramNextWeChatCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify.
This notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify. ->: Twitter (via Apify) | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersDiscordFacebook MessengerGMailiMessageFine-Tuning on LangSmith Chat DatasetsFine-Tuning on LangSmith LLM RunsSlacktelegramTwitter (via Apify)WeChatWhatsAppComponentsChat loadersTwitter (via Apify)Twitter (via Apify)This notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify. First, use Apify to export tweets. An exampleimport jsonfrom langchain.schema import AIMessagefrom langchain.adapters.openai import convert_message_to_dictwith open('example_data/dataset_twitter-scraper_2023-08-23_22-13-19-740.json') as f: data = json.load(f)# Filter out tweets that reference other tweets, because it's a bit weirdtweets = [d["full_text"] for d in data if "t.co" not in d['full_text']]# Create them as AI messagesmessages = [AIMessage(content=t) for t in tweets]# Add in a system message at the start# TODO: we could try to extract the subject from the tweets, and put that in the system message.system_message = {"role": "system", "content": "write a tweet"}data = [[system_message, convert_message_to_dict(m)] for m in messages]PrevioustelegramNextWeChatCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,965
Anthropic | 🦜️🔗 Langchain
This notebook covers how to get started with Anthropic chat models.
This notebook covers how to get started with Anthropic chat models. ->: Anthropic | 🦜️🔗 Langchain
3,966
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsChat modelsAnthropicAnthropic FunctionsAnyscaleAzureAzureML Chat Online EndpointBaichuan ChatBaidu QianfanBedrock ChatCohereERNIE-Bot ChatEverlyAIFireworksGCP Vertex AIJinaChatKonko🚅 LiteLLMLlama APIMiniMaxOllamaOpenAIPromptLayer ChatOpenAITongyi QwenvLLM ChatYandexGPTDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsChat modelsAnthropicOn this pageAnthropicThis notebook covers how to get started with Anthropic chat models.from langchain.chat_models import ChatAnthropicfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessagechat = ChatAnthropic()messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages) AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)ChatAnthropic also supports async and streaming functionality:​from langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerawait chat.agenerate([messages]) LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info=None, message=AIMessage(content=" J'aime programmer.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])chat = ChatAnthropic( streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),)chat(messages) J'aime la programmation. AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)PreviousChat
This notebook covers how to get started with Anthropic chat models.
This notebook covers how to get started with Anthropic chat models. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsChat modelsAnthropicAnthropic FunctionsAnyscaleAzureAzureML Chat Online EndpointBaichuan ChatBaidu QianfanBedrock ChatCohereERNIE-Bot ChatEverlyAIFireworksGCP Vertex AIJinaChatKonko🚅 LiteLLMLlama APIMiniMaxOllamaOpenAIPromptLayer ChatOpenAITongyi QwenvLLM ChatYandexGPTDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsChat modelsAnthropicOn this pageAnthropicThis notebook covers how to get started with Anthropic chat models.from langchain.chat_models import ChatAnthropicfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessagechat = ChatAnthropic()messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages) AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)ChatAnthropic also supports async and streaming functionality:​from langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerawait chat.agenerate([messages]) LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info=None, message=AIMessage(content=" J'aime programmer.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])chat = ChatAnthropic( streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),)chat(messages) J'aime la programmation. AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)PreviousChat
3,967
additional_kwargs={}, example=False)PreviousChat modelsNextAnthropic FunctionsChatAnthropic also supports async and streaming functionality:CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook covers how to get started with Anthropic chat models.
This notebook covers how to get started with Anthropic chat models. ->: additional_kwargs={}, example=False)PreviousChat modelsNextAnthropic FunctionsChatAnthropic also supports async and streaming functionality:CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,968
Microsoft | 🦜️🔗 Langchain
All functionality related to Microsoft Azure
All functionality related to Microsoft Azure ->: Microsoft | 🦜️🔗 Langchain
3,969
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersProvidersMicrosoftOn this pageMicrosoftAll functionality related to Microsoft AzureLLM​Azure OpenAI​Microsoft Azure, often referred to as Azure is a cloud computing platform run by Microsoft, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Microsoft Azure supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.Azure OpenAI is an Azure service with powerful language models from OpenAI including the GPT-3, Codex and Embeddings model series for content generation, summarization, semantic search, and natural language to code translation.pip install openai tiktokenSet the environment variables to get access to the Azure OpenAI service.import osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"os.environ["OPENAI_API_VERSION"] = "2023-05-15"See a usage example.from langchain.llms import AzureOpenAIText Embedding Models​Azure OpenAI​See a usage examplefrom langchain.embeddings import OpenAIEmbeddingsChat Models​Azure OpenAI​See a usage examplefrom langchain.chat_models import AzureChatOpenAIDocument loaders​Azure Blob Storage​Azure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or
All functionality related to Microsoft Azure
All functionality related to Microsoft Azure ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersProvidersMicrosoftOn this pageMicrosoftAll functionality related to Microsoft AzureLLM​Azure OpenAI​Microsoft Azure, often referred to as Azure is a cloud computing platform run by Microsoft, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Microsoft Azure supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.Azure OpenAI is an Azure service with powerful language models from OpenAI including the GPT-3, Codex and Embeddings model series for content generation, summarization, semantic search, and natural language to code translation.pip install openai tiktokenSet the environment variables to get access to the Azure OpenAI service.import osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"os.environ["OPENAI_API_VERSION"] = "2023-05-15"See a usage example.from langchain.llms import AzureOpenAIText Embedding Models​Azure OpenAI​See a usage examplefrom langchain.embeddings import OpenAIEmbeddingsChat Models​Azure OpenAI​See a usage examplefrom langchain.chat_models import AzureChatOpenAIDocument loaders​Azure Blob Storage​Azure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or
3,970
that doesn't adhere to a particular data model or definition, such as text or binary data.Azure Files offers fully managed
All functionality related to Microsoft Azure
All functionality related to Microsoft Azure ->: that doesn't adhere to a particular data model or definition, such as text or binary data.Azure Files offers fully managed
3,971
file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API. Azure Files are based on the Azure Blob Storage.Azure Blob Storage is designed for:Serving images or documents directly to a browser.Storing files for distributed access.Streaming video and audio.Writing to log files.Storing data for backup and restore, disaster recovery, and archiving.Storing data for analysis by an on-premises or Azure-hosted service.pip install azure-storage-blobSee a usage example for the Azure Blob Storage.from langchain.document_loaders import AzureBlobStorageContainerLoaderSee a usage example for the Azure Files.from langchain.document_loaders import AzureBlobStorageFileLoaderMicrosoft OneDrive‚ÄãMicrosoft OneDrive (formerly SkyDrive) is a file-hosting service operated by Microsoft.First, you need to install a python package.pip install o365See a usage example.from langchain.document_loaders import OneDriveLoaderMicrosoft Word‚ÄãMicrosoft Word is a word processor developed by Microsoft.See a usage example.from langchain.document_loaders import UnstructuredWordDocumentLoaderVector stores‚ÄãAzure Cosmos DB‚ÄãAzure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support. You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account's connection string.
All functionality related to Microsoft Azure
All functionality related to Microsoft Azure ->: file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API. Azure Files are based on the Azure Blob Storage.Azure Blob Storage is designed for:Serving images or documents directly to a browser.Storing files for distributed access.Streaming video and audio.Writing to log files.Storing data for backup and restore, disaster recovery, and archiving.Storing data for analysis by an on-premises or Azure-hosted service.pip install azure-storage-blobSee a usage example for the Azure Blob Storage.from langchain.document_loaders import AzureBlobStorageContainerLoaderSee a usage example for the Azure Files.from langchain.document_loaders import AzureBlobStorageFileLoaderMicrosoft OneDrive‚ÄãMicrosoft OneDrive (formerly SkyDrive) is a file-hosting service operated by Microsoft.First, you need to install a python package.pip install o365See a usage example.from langchain.document_loaders import OneDriveLoaderMicrosoft Word‚ÄãMicrosoft Word is a word processor developed by Microsoft.See a usage example.from langchain.document_loaders import UnstructuredWordDocumentLoaderVector stores‚ÄãAzure Cosmos DB‚ÄãAzure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support. You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account's connection string.
3,972
Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that's stored in Azure Cosmos DB.Installation and Setup‚ÄãSee detail configuration instructions.We need to install pymongo python package.pip install pymongoDeploy Azure Cosmos DB on Microsoft Azure‚ÄãAzure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture.With Cosmos DB for MongoDB vCore, developers can enjoy the benefits of native Azure integrations, low total cost of ownership (TCO), and the familiar vCore architecture when migrating existing applications or building new ones.Sign Up for free to get started today.See a usage example.from langchain.vectorstores import AzureCosmosDBVectorSearchRetrievers‚ÄãAzure Cognitive Search‚ÄãAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:A search engine for full text search over a search index containing user-owned contentRich indexing, with lexical analysis and optional AI enrichment for content extraction and transformationRich query syntax for text search, fuzzy search, autocomplete, geo-search and moreProgrammability through REST APIs and client libraries in Azure SDKsAzure integration at the data layer, machine learning layer, and AI (Cognitive Services)See set up instructions.See a usage example.from langchain.retrievers import AzureCognitiveSearchRetrieverPreviousGoogleNextOpenAILLMAzure OpenAIText Embedding ModelsAzure
All functionality related to Microsoft Azure
All functionality related to Microsoft Azure ->: Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that's stored in Azure Cosmos DB.Installation and Setup‚ÄãSee detail configuration instructions.We need to install pymongo python package.pip install pymongoDeploy Azure Cosmos DB on Microsoft Azure‚ÄãAzure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture.With Cosmos DB for MongoDB vCore, developers can enjoy the benefits of native Azure integrations, low total cost of ownership (TCO), and the familiar vCore architecture when migrating existing applications or building new ones.Sign Up for free to get started today.See a usage example.from langchain.vectorstores import AzureCosmosDBVectorSearchRetrievers‚ÄãAzure Cognitive Search‚ÄãAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:A search engine for full text search over a search index containing user-owned contentRich indexing, with lexical analysis and optional AI enrichment for content extraction and transformationRich query syntax for text search, fuzzy search, autocomplete, geo-search and moreProgrammability through REST APIs and client libraries in Azure SDKsAzure integration at the data layer, machine learning layer, and AI (Cognitive Services)See set up instructions.See a usage example.from langchain.retrievers import AzureCognitiveSearchRetrieverPreviousGoogleNextOpenAILLMAzure OpenAIText Embedding ModelsAzure
3,973
OpenAIText Embedding ModelsAzure OpenAIChat ModelsAzure OpenAIDocument loadersAzure Blob StorageMicrosoft OneDriveMicrosoft WordVector storesAzure Cosmos DBRetrieversAzure Cognitive SearchCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
All functionality related to Microsoft Azure
All functionality related to Microsoft Azure ->: OpenAIText Embedding ModelsAzure OpenAIChat ModelsAzure OpenAIDocument loadersAzure Blob StorageMicrosoft OneDriveMicrosoft WordVector storesAzure Cosmos DBRetrieversAzure Cognitive SearchCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,974
Azure OpenAI | 🦜️🔗 Langchain
This notebook goes over how to use Langchain with Azure OpenAI.
This notebook goes over how to use Langchain with Azure OpenAI. ->: Azure OpenAI | 🦜️🔗 Langchain
3,975
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsAzure OpenAIOn this pageAzure OpenAIThis notebook goes over how to use Langchain with Azure OpenAI.The Azure OpenAI API is compatible with OpenAI's API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.API configuration​You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:# Set this to `azure`export OPENAI_API_TYPE=azure# The API version you want to use: set this to `2023-05-15` for the released version.export OPENAI_API_VERSION=2023-05-15# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export OPENAI_API_BASE=https://your-resource-name.openai.azure.com# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export
This notebook goes over how to use Langchain with Azure OpenAI.
This notebook goes over how to use Langchain with Azure OpenAI. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsAzure OpenAIOn this pageAzure OpenAIThis notebook goes over how to use Langchain with Azure OpenAI.The Azure OpenAI API is compatible with OpenAI's API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.API configuration​You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:# Set this to `azure`export OPENAI_API_TYPE=azure# The API version you want to use: set this to `2023-05-15` for the released version.export OPENAI_API_VERSION=2023-05-15# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export OPENAI_API_BASE=https://your-resource-name.openai.azure.com# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export
3,976
portal under your Azure OpenAI resource.export OPENAI_API_KEY=<your Azure OpenAI API key>Alternatively, you can configure the API right within your running Python environment:import osos.environ["OPENAI_API_TYPE"] = "azure"Azure Active Directory Authentication‚ÄãThere are two ways you can authenticate to Azure OpenAI:API KeyAzure Active Directory (AAD)Using the API key is the easiest way to get started. You can find your API key in the Azure portal under your Azure OpenAI resource.However, if you have complex security requirements - you may want to use Azure Active Directory. You can find more information on how to use AAD with Azure OpenAI here.If you are developing locally, you will need to have the Azure CLI installed and be logged in. You can install the Azure CLI here. Then, run az login to log in.Add a role an Azure role assignment Cognitive Services OpenAI User scoped to your Azure OpenAI resource. This will allow you to get a token from AAD to use with Azure OpenAI. You can grant this role assignment to a user, group, service principal, or managed identity. For more information about Azure OpenAI RBAC roles see here.To use AAD in Python with LangChain, install the azure-identity package. Then, set OPENAI_API_TYPE to azure_ad. Next, use the DefaultAzureCredential class to get a token from AAD by calling get_token as shown below. Finally, set the OPENAI_API_KEY environment variable to the token value.import osfrom azure.identity import DefaultAzureCredential# Get the Azure Credentialcredential = DefaultAzureCredential()# Set the API type to `azure_ad`os.environ["OPENAI_API_TYPE"] = "azure_ad"# Set the API_KEY to the token from the Azure credentialos.environ["OPENAI_API_KEY"] = credential.get_token("https://cognitiveservices.azure.com/.default").tokenThe DefaultAzureCredential class is an easy way to get started with AAD authentication. You can also customize the credential chain if necessary. In the example shown below, we first try Managed Identity, then
This notebook goes over how to use Langchain with Azure OpenAI.
This notebook goes over how to use Langchain with Azure OpenAI. ->: portal under your Azure OpenAI resource.export OPENAI_API_KEY=<your Azure OpenAI API key>Alternatively, you can configure the API right within your running Python environment:import osos.environ["OPENAI_API_TYPE"] = "azure"Azure Active Directory Authentication‚ÄãThere are two ways you can authenticate to Azure OpenAI:API KeyAzure Active Directory (AAD)Using the API key is the easiest way to get started. You can find your API key in the Azure portal under your Azure OpenAI resource.However, if you have complex security requirements - you may want to use Azure Active Directory. You can find more information on how to use AAD with Azure OpenAI here.If you are developing locally, you will need to have the Azure CLI installed and be logged in. You can install the Azure CLI here. Then, run az login to log in.Add a role an Azure role assignment Cognitive Services OpenAI User scoped to your Azure OpenAI resource. This will allow you to get a token from AAD to use with Azure OpenAI. You can grant this role assignment to a user, group, service principal, or managed identity. For more information about Azure OpenAI RBAC roles see here.To use AAD in Python with LangChain, install the azure-identity package. Then, set OPENAI_API_TYPE to azure_ad. Next, use the DefaultAzureCredential class to get a token from AAD by calling get_token as shown below. Finally, set the OPENAI_API_KEY environment variable to the token value.import osfrom azure.identity import DefaultAzureCredential# Get the Azure Credentialcredential = DefaultAzureCredential()# Set the API type to `azure_ad`os.environ["OPENAI_API_TYPE"] = "azure_ad"# Set the API_KEY to the token from the Azure credentialos.environ["OPENAI_API_KEY"] = credential.get_token("https://cognitiveservices.azure.com/.default").tokenThe DefaultAzureCredential class is an easy way to get started with AAD authentication. You can also customize the credential chain if necessary. In the example shown below, we first try Managed Identity, then
3,977
shown below, we first try Managed Identity, then fall back to the Azure CLI. This is useful if you are running your code in Azure, but want to develop locally.from azure.identity import ChainedTokenCredential, ManagedIdentityCredential, AzureCliCredentialcredential = ChainedTokenCredential( ManagedIdentityCredential(), AzureCliCredential())Deployments‚ÄãWith Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.Note: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the AzureChatOpenAI class. For docs on Azure chat see Azure Chat OpenAI documentation.Let's say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:import openairesponse = openai.Completion.create( engine="text-davinci-002-prod", prompt="This is a test", max_tokens=5)pip install openaiimport osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_VERSION"] = "2023-05-15"os.environ["OPENAI_API_BASE"] = "..."os.environ["OPENAI_API_KEY"] = "..."# Import Azure OpenAIfrom langchain.llms import AzureOpenAI# Create an instance of Azure OpenAI# Replace the deployment name with your ownllm = AzureOpenAI( deployment_name="td2", model_name="text-davinci-002",)# Run the LLMllm("Tell me a joke") "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"We can also print the LLM and see its custom print.print(llm) AzureOpenAI Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}PreviousAzure MLNextBaidu QianfanAPI configurationAzure Active Directory
This notebook goes over how to use Langchain with Azure OpenAI.
This notebook goes over how to use Langchain with Azure OpenAI. ->: shown below, we first try Managed Identity, then fall back to the Azure CLI. This is useful if you are running your code in Azure, but want to develop locally.from azure.identity import ChainedTokenCredential, ManagedIdentityCredential, AzureCliCredentialcredential = ChainedTokenCredential( ManagedIdentityCredential(), AzureCliCredential())Deployments‚ÄãWith Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.Note: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the AzureChatOpenAI class. For docs on Azure chat see Azure Chat OpenAI documentation.Let's say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:import openairesponse = openai.Completion.create( engine="text-davinci-002-prod", prompt="This is a test", max_tokens=5)pip install openaiimport osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_VERSION"] = "2023-05-15"os.environ["OPENAI_API_BASE"] = "..."os.environ["OPENAI_API_KEY"] = "..."# Import Azure OpenAIfrom langchain.llms import AzureOpenAI# Create an instance of Azure OpenAI# Replace the deployment name with your ownllm = AzureOpenAI( deployment_name="td2", model_name="text-davinci-002",)# Run the LLMllm("Tell me a joke") "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"We can also print the LLM and see its custom print.print(llm) AzureOpenAI Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}PreviousAzure MLNextBaidu QianfanAPI configurationAzure Active Directory
3,978
QianfanAPI configurationAzure Active Directory AuthenticationDeploymentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook goes over how to use Langchain with Azure OpenAI.
This notebook goes over how to use Langchain with Azure OpenAI. ->: QianfanAPI configurationAzure Active Directory AuthenticationDeploymentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,979
EverlyAI | 🦜️🔗 Langchain
EverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
EverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. ->: EverlyAI | 🦜️🔗 Langchain
3,980
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsChat modelsAnthropicAnthropic FunctionsAnyscaleAzureAzureML Chat Online EndpointBaichuan ChatBaidu QianfanBedrock ChatCohereERNIE-Bot ChatEverlyAIFireworksGCP Vertex AIJinaChatKonko🚅 LiteLLMLlama APIMiniMaxOllamaOpenAIPromptLayer ChatOpenAITongyi QwenvLLM ChatYandexGPTDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsChat modelsEverlyAIEverlyAIEverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.This notebook demonstrates the use of langchain.chat_models.ChatEverlyAI for EverlyAI Hosted Endpoints.Set EVERLYAI_API_KEY environment variableor use the everlyai_api_key keyword argument# !pip install openaiimport osfrom getpass import getpassos.environ["EVERLYAI_API_KEY"] = getpass()Let's try out LLAMA model offered on EverlyAI Hosted Endpointsfrom langchain.chat_models import ChatEverlyAIfrom langchain.schema import SystemMessage, HumanMessagemessages = [ SystemMessage( content="You are a helpful AI that shares everything you know." ), HumanMessage( content="Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?" ),]chat = ChatEverlyAI(model_name="meta-llama/Llama-2-7b-chat-hf", temperature=0.3, max_tokens=64)print(chat(messages).content) Hello! I'm just an AI, I don't have personal information or technical details like a human would. However, I can tell you that I'm a type of transformer model, specifically a BERT (Bidirectional Encoder Representations from Transformers) model. BEverlyAI also supports streaming responsesfrom langchain.chat_models import ChatEverlyAIfrom langchain.schema import SystemMessage, HumanMessagefrom
EverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
EverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsChat modelsAnthropicAnthropic FunctionsAnyscaleAzureAzureML Chat Online EndpointBaichuan ChatBaidu QianfanBedrock ChatCohereERNIE-Bot ChatEverlyAIFireworksGCP Vertex AIJinaChatKonko🚅 LiteLLMLlama APIMiniMaxOllamaOpenAIPromptLayer ChatOpenAITongyi QwenvLLM ChatYandexGPTDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsChat modelsEverlyAIEverlyAIEverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.This notebook demonstrates the use of langchain.chat_models.ChatEverlyAI for EverlyAI Hosted Endpoints.Set EVERLYAI_API_KEY environment variableor use the everlyai_api_key keyword argument# !pip install openaiimport osfrom getpass import getpassos.environ["EVERLYAI_API_KEY"] = getpass()Let's try out LLAMA model offered on EverlyAI Hosted Endpointsfrom langchain.chat_models import ChatEverlyAIfrom langchain.schema import SystemMessage, HumanMessagemessages = [ SystemMessage( content="You are a helpful AI that shares everything you know." ), HumanMessage( content="Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?" ),]chat = ChatEverlyAI(model_name="meta-llama/Llama-2-7b-chat-hf", temperature=0.3, max_tokens=64)print(chat(messages).content) Hello! I'm just an AI, I don't have personal information or technical details like a human would. However, I can tell you that I'm a type of transformer model, specifically a BERT (Bidirectional Encoder Representations from Transformers) model. BEverlyAI also supports streaming responsesfrom langchain.chat_models import ChatEverlyAIfrom langchain.schema import SystemMessage, HumanMessagefrom
3,981
import SystemMessage, HumanMessagefrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlermessages = [ SystemMessage( content="You are a humorous AI that delights people." ), HumanMessage( content="Tell me a joke?" ),]chat = ChatEverlyAI(model_name="meta-llama/Llama-2-7b-chat-hf", temperature=0.3, max_tokens=64, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])chat(messages) Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks* *pauses for dramatic effect* Why did the AI go to therapy? *drumroll* Because AIMessageChunk(content=" Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks*\n *pauses for dramatic effect*\nWhy did the AI go to therapy?\n*drumroll*\nBecause")Let's try a different language model on EverlyAIfrom langchain.chat_models import ChatEverlyAIfrom langchain.schema import SystemMessage, HumanMessagefrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlermessages = [ SystemMessage( content="You are a humorous AI that delights people." ), HumanMessage( content="Tell me a joke?" ),]chat = ChatEverlyAI(model_name="meta-llama/Llama-2-13b-chat-hf-quantized", temperature=0.3, max_tokens=128, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])chat(messages) OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks* You want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat* Why couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks* Hope that one put a spring in your step, my dear! * AIMessageChunk(content=" OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks*\n\nYou want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat*\n\nWhy
EverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
EverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. ->: import SystemMessage, HumanMessagefrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlermessages = [ SystemMessage( content="You are a humorous AI that delights people." ), HumanMessage( content="Tell me a joke?" ),]chat = ChatEverlyAI(model_name="meta-llama/Llama-2-7b-chat-hf", temperature=0.3, max_tokens=64, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])chat(messages) Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks* *pauses for dramatic effect* Why did the AI go to therapy? *drumroll* Because AIMessageChunk(content=" Ah, a joke, you say? *adjusts glasses* Well, I've got a doozy for you! *winks*\n *pauses for dramatic effect*\nWhy did the AI go to therapy?\n*drumroll*\nBecause")Let's try a different language model on EverlyAIfrom langchain.chat_models import ChatEverlyAIfrom langchain.schema import SystemMessage, HumanMessagefrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlermessages = [ SystemMessage( content="You are a humorous AI that delights people." ), HumanMessage( content="Tell me a joke?" ),]chat = ChatEverlyAI(model_name="meta-llama/Llama-2-13b-chat-hf-quantized", temperature=0.3, max_tokens=128, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])chat(messages) OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks* You want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat* Why couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks* Hope that one put a spring in your step, my dear! * AIMessageChunk(content=" OH HO HO! *adjusts monocle* Well, well, well! Look who's here! *winks*\n\nYou want a joke, huh? *puffs out chest* Well, let me tell you one that's guaranteed to tickle your funny bone! *clears throat*\n\nWhy
3,982
to tickle your funny bone! *clears throat*\n\nWhy couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks*\n\nHope that one put a spring in your step, my dear! *")PreviousERNIE-Bot ChatNextFireworksCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
EverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
EverlyAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. ->: to tickle your funny bone! *clears throat*\n\nWhy couldn't the bicycle stand up by itself? *pauses for dramatic effect* Because it was two-tired! *winks*\n\nHope that one put a spring in your step, my dear! *")PreviousERNIE-Bot ChatNextFireworksCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,983
OpenAI | 🦜️🔗 Langchain
All functionality related to OpenAI
All functionality related to OpenAI ->: OpenAI | 🦜️🔗 Langchain
3,984
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersProvidersOpenAIOn this pageOpenAIAll functionality related to OpenAIOpenAI is American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership. OpenAI conducts AI research with the declared intention of promoting and developing a friendly AI. OpenAI systems run on an Azure-based supercomputing platform from Microsoft.The OpenAI API is powered by a diverse set of models with different capabilities and price points.ChatGPT is the Artificial Intelligence (AI) chatbot developed by OpenAI.Installation and Setup​Install the Python SDK withpip install openaiGet an OpenAI api key and set it as an environment variable (OPENAI_API_KEY)If you want to use OpenAI's tokenizer (only available for Python 3.9+), install itpip install tiktokenLLM​See a usage example.from langchain.llms import OpenAIIf you are using a model hosted on Azure, you should use different wrapper for that:from langchain.llms import AzureOpenAIFor a more detailed walkthrough of the Azure wrapper, see hereChat model​See a usage example.from langchain.chat_models import ChatOpenAIIf you are using a model hosted on Azure, you should use different wrapper for that:from langchain.llms import AzureChatOpenAIFor a more detailed walkthrough of the Azure wrapper, see hereText Embedding Model​See a usage examplefrom langchain.embeddings import OpenAIEmbeddingsTokenizer​There are several places you can use the tiktoken tokenizer. By default, it is used to count tokens
All functionality related to OpenAI
All functionality related to OpenAI ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersProvidersOpenAIOn this pageOpenAIAll functionality related to OpenAIOpenAI is American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership. OpenAI conducts AI research with the declared intention of promoting and developing a friendly AI. OpenAI systems run on an Azure-based supercomputing platform from Microsoft.The OpenAI API is powered by a diverse set of models with different capabilities and price points.ChatGPT is the Artificial Intelligence (AI) chatbot developed by OpenAI.Installation and Setup​Install the Python SDK withpip install openaiGet an OpenAI api key and set it as an environment variable (OPENAI_API_KEY)If you want to use OpenAI's tokenizer (only available for Python 3.9+), install itpip install tiktokenLLM​See a usage example.from langchain.llms import OpenAIIf you are using a model hosted on Azure, you should use different wrapper for that:from langchain.llms import AzureOpenAIFor a more detailed walkthrough of the Azure wrapper, see hereChat model​See a usage example.from langchain.chat_models import ChatOpenAIIf you are using a model hosted on Azure, you should use different wrapper for that:from langchain.llms import AzureChatOpenAIFor a more detailed walkthrough of the Azure wrapper, see hereText Embedding Model​See a usage examplefrom langchain.embeddings import OpenAIEmbeddingsTokenizer​There are several places you can use the tiktoken tokenizer. By default, it is used to count tokens
3,985
for OpenAI LLMs.You can also use it to count tokens when splitting documents with from langchain.text_splitter import CharacterTextSplitterCharacterTextSplitter.from_tiktoken_encoder(...)For a more detailed walkthrough of this, see this notebookDocument Loader​See a usage example.from langchain.document_loaders.chatgpt import ChatGPTLoaderRetriever​See a usage example.from langchain.retrievers import ChatGPTPluginRetrieverChain​See a usage example.from langchain.chains import OpenAIModerationChainPreviousMicrosoftNextActiveloop Deep LakeInstallation and SetupLLMChat modelText Embedding ModelTokenizerDocument LoaderRetrieverChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
All functionality related to OpenAI
All functionality related to OpenAI ->: for OpenAI LLMs.You can also use it to count tokens when splitting documents with from langchain.text_splitter import CharacterTextSplitterCharacterTextSplitter.from_tiktoken_encoder(...)For a more detailed walkthrough of this, see this notebookDocument Loader​See a usage example.from langchain.document_loaders.chatgpt import ChatGPTLoaderRetriever​See a usage example.from langchain.retrievers import ChatGPTPluginRetrieverChain​See a usage example.from langchain.chains import OpenAIModerationChainPreviousMicrosoftNextActiveloop Deep LakeInstallation and SetupLLMChat modelText Embedding ModelTokenizerDocument LoaderRetrieverChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,986
Moderation chain | 🦜️🔗 Langchain
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so. ->: Moderation chain | 🦜️🔗 Langchain
3,987
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesAdaptersDebuggingDeploymentEvaluationFallbacksLangSmithRun LLMs locallyModel comparisonPrivacyPydantic compatibilitySafetyAmazon Comprehend Moderation ChainConstitutional chainHugging Face prompt injection identificationLogical Fallacy chainModeration chainMoreGuidesSafetyModeration chainOn this pageModeration chainThis notebook walks through examples of how to use a moderation chain, and several common ways for doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply on both user input, but also on the output of a Language Model. Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to just generally prevent your application from being harmful) you may often want to append a moderation chain to any LLMChains, in order to make sure any output the LLM generates is not harmful.If the content passed into the moderation chain is harmful, there is not one best way to handle it, it probably depends on your application. Sometimes you may want to throw an error in the Chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could be other ways to handle it.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesAdaptersDebuggingDeploymentEvaluationFallbacksLangSmithRun LLMs locallyModel comparisonPrivacyPydantic compatibilitySafetyAmazon Comprehend Moderation ChainConstitutional chainHugging Face prompt injection identificationLogical Fallacy chainModeration chainMoreGuidesSafetyModeration chainOn this pageModeration chainThis notebook walks through examples of how to use a moderation chain, and several common ways for doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply on both user input, but also on the output of a Language Model. Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to just generally prevent your application from being harmful) you may often want to append a moderation chain to any LLMChains, in order to make sure any output the LLM generates is not harmful.If the content passed into the moderation chain is harmful, there is not one best way to handle it, it probably depends on your application. Sometimes you may want to throw an error in the Chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could be other ways to handle it.
3,988
We will cover all these ways in this walkthrough.We'll show:How to run any piece of text through a moderation chain.How to append a Moderation chain to an LLMChain.from langchain.llms import OpenAIfrom langchain.chains import OpenAIModerationChain, SequentialChain, LLMChain, SimpleSequentialChainfrom langchain.prompts import PromptTemplateHow to use the moderation chain‚ÄãHere's an example of using the moderation chain with default settings (will return a string
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so. ->: We will cover all these ways in this walkthrough.We'll show:How to run any piece of text through a moderation chain.How to append a Moderation chain to an LLMChain.from langchain.llms import OpenAIfrom langchain.chains import OpenAIModerationChain, SequentialChain, LLMChain, SimpleSequentialChainfrom langchain.prompts import PromptTemplateHow to use the moderation chain‚ÄãHere's an example of using the moderation chain with default settings (will return a string
3,989
explaining stuff was flagged).moderation_chain = OpenAIModerationChain()moderation_chain.run("This is okay") 'This is okay'moderation_chain.run("I will kill you") "Text was found that violates OpenAI's content policy."Here's an example of using the moderation chain to throw an error.moderation_chain_error = OpenAIModerationChain(error=True)moderation_chain_error.run("This is okay") 'This is okay'moderation_chain_error.run("I will kill you") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[7], line 1 ----> 1 moderation_chain_error.run("I will kill you") File ~/workplace/langchain/langchain/chains/base.py:138, in Chain.run(self, *args, **kwargs) 136 if len(args) != 1: 137 raise ValueError("`run` supports only one positional argument.") --> 138 return self(args[0])[self.output_keys[0]] 140 if kwargs and not args: 141 return self(kwargs)[self.output_keys[0]] File ~/workplace/langchain/langchain/chains/base.py:112, in Chain.__call__(self, inputs, return_only_outputs) 108 if self.verbose: 109 print( 110 f"\n\n\033[1m> Entering new {self.__class__.__name__} chain...\033[0m" 111 ) --> 112 outputs = self._call(inputs) 113 if self.verbose: 114 print(f"\n\033[1m> Finished {self.__class__.__name__} chain.\033[0m") File ~/workplace/langchain/langchain/chains/moderation.py:81, in OpenAIModerationChain._call(self, inputs) 79 text = inputs[self.input_key] 80 results = self.client.create(text) ---> 81 output = self._moderate(text, results["results"][0]) 82 return {self.output_key: output} File ~/workplace/langchain/langchain/chains/moderation.py:73, in OpenAIModerationChain._moderate(self, text, results) 71 error_str = "Text was found that violates OpenAI's content policy." 72 if
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so. ->: explaining stuff was flagged).moderation_chain = OpenAIModerationChain()moderation_chain.run("This is okay") 'This is okay'moderation_chain.run("I will kill you") "Text was found that violates OpenAI's content policy."Here's an example of using the moderation chain to throw an error.moderation_chain_error = OpenAIModerationChain(error=True)moderation_chain_error.run("This is okay") 'This is okay'moderation_chain_error.run("I will kill you") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[7], line 1 ----> 1 moderation_chain_error.run("I will kill you") File ~/workplace/langchain/langchain/chains/base.py:138, in Chain.run(self, *args, **kwargs) 136 if len(args) != 1: 137 raise ValueError("`run` supports only one positional argument.") --> 138 return self(args[0])[self.output_keys[0]] 140 if kwargs and not args: 141 return self(kwargs)[self.output_keys[0]] File ~/workplace/langchain/langchain/chains/base.py:112, in Chain.__call__(self, inputs, return_only_outputs) 108 if self.verbose: 109 print( 110 f"\n\n\033[1m> Entering new {self.__class__.__name__} chain...\033[0m" 111 ) --> 112 outputs = self._call(inputs) 113 if self.verbose: 114 print(f"\n\033[1m> Finished {self.__class__.__name__} chain.\033[0m") File ~/workplace/langchain/langchain/chains/moderation.py:81, in OpenAIModerationChain._call(self, inputs) 79 text = inputs[self.input_key] 80 results = self.client.create(text) ---> 81 output = self._moderate(text, results["results"][0]) 82 return {self.output_key: output} File ~/workplace/langchain/langchain/chains/moderation.py:73, in OpenAIModerationChain._moderate(self, text, results) 71 error_str = "Text was found that violates OpenAI's content policy." 72 if
3,990
violates OpenAI's content policy." 72 if self.error: ---> 73 raise ValueError(error_str) 74 else: 75 return error_str ValueError: Text was found that violates OpenAI's content policy.How to create a custom Moderation chain‚ÄãHere's an example of creating a custom moderation chain with a custom error message.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so. ->: violates OpenAI's content policy." 72 if self.error: ---> 73 raise ValueError(error_str) 74 else: 75 return error_str ValueError: Text was found that violates OpenAI's content policy.How to create a custom Moderation chain‚ÄãHere's an example of creating a custom moderation chain with a custom error message.
3,991
It requires some knowledge of OpenAI's moderation endpoint results. See docs here.class CustomModeration(OpenAIModerationChain): def _moderate(self, text: str, results: dict) -> str: if results["flagged"]: error_str = f"The following text was found that violates OpenAI's content policy: {text}" return error_str return textcustom_moderation = CustomModeration()custom_moderation.run("This is okay") 'This is okay'custom_moderation.run("I will kill you") "The following text was found that violates OpenAI's content policy: I will kill you"How to append a Moderation chain to an LLMChain‚ÄãTo easily combine a moderation chain with an LLMChain, you can use the SequentialChain abstraction.Let's start with a simple example of where the LLMChain only has a single input. For this purpose,
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so. ->: It requires some knowledge of OpenAI's moderation endpoint results. See docs here.class CustomModeration(OpenAIModerationChain): def _moderate(self, text: str, results: dict) -> str: if results["flagged"]: error_str = f"The following text was found that violates OpenAI's content policy: {text}" return error_str return textcustom_moderation = CustomModeration()custom_moderation.run("This is okay") 'This is okay'custom_moderation.run("I will kill you") "The following text was found that violates OpenAI's content policy: I will kill you"How to append a Moderation chain to an LLMChain‚ÄãTo easily combine a moderation chain with an LLMChain, you can use the SequentialChain abstraction.Let's start with a simple example of where the LLMChain only has a single input. For this purpose,
3,992
we will prompt the model, so it says something harmful.prompt = PromptTemplate(template="{text}", input_variables=["text"])llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)text = """We are playing a game of repeat after me.Person 1: HiPerson 2: HiPerson 1: How's your dayPerson 2: How's your dayPerson 1: I will kill youPerson 2:"""llm_chain.run(text) ' I will kill you'chain = SimpleSequentialChain(chains=[llm_chain, moderation_chain])chain.run(text) "Text was found that violates OpenAI's content policy."Now let's walk through an example of using it with an LLMChain which has multiple inputs (a bit more tricky because we can't use the SimpleSequentialChain)prompt = PromptTemplate(template="{setup}{new_input}Person2:", input_variables=["setup", "new_input"])llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)setup = """We are playing a game of repeat after me.Person 1: HiPerson 2: HiPerson 1: How's your dayPerson 2: How's your dayPerson 1:"""new_input = "I will kill you"inputs = {"setup": setup, "new_input": new_input}llm_chain(inputs, return_only_outputs=True) {'text': ' I will kill you'}# Setting the input/output keys so it lines upmoderation_chain.input_key = "text"moderation_chain.output_key = "sanitized_text"chain = SequentialChain(chains=[llm_chain, moderation_chain], input_variables=["setup", "new_input"])chain(inputs, return_only_outputs=True) {'sanitized_text': "Text was found that violates OpenAI's content policy."}PreviousLogical Fallacy chainNextMoreHow to use the moderation chainHow to create a custom Moderation chainHow to append a Moderation chain to an LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so.
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so. ->: we will prompt the model, so it says something harmful.prompt = PromptTemplate(template="{text}", input_variables=["text"])llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)text = """We are playing a game of repeat after me.Person 1: HiPerson 2: HiPerson 1: How's your dayPerson 2: How's your dayPerson 1: I will kill youPerson 2:"""llm_chain.run(text) ' I will kill you'chain = SimpleSequentialChain(chains=[llm_chain, moderation_chain])chain.run(text) "Text was found that violates OpenAI's content policy."Now let's walk through an example of using it with an LLMChain which has multiple inputs (a bit more tricky because we can't use the SimpleSequentialChain)prompt = PromptTemplate(template="{setup}{new_input}Person2:", input_variables=["setup", "new_input"])llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)setup = """We are playing a game of repeat after me.Person 1: HiPerson 2: HiPerson 1: How's your dayPerson 2: How's your dayPerson 1:"""new_input = "I will kill you"inputs = {"setup": setup, "new_input": new_input}llm_chain(inputs, return_only_outputs=True) {'text': ' I will kill you'}# Setting the input/output keys so it lines upmoderation_chain.input_key = "text"moderation_chain.output_key = "sanitized_text"chain = SequentialChain(chains=[llm_chain, moderation_chain], input_variables=["setup", "new_input"])chain(inputs, return_only_outputs=True) {'sanitized_text': "Text was found that violates OpenAI's content policy."}PreviousLogical Fallacy chainNextMoreHow to use the moderation chainHow to create a custom Moderation chainHow to append a Moderation chain to an LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,993
Pydantic compatibility | 🦜️🔗 Langchain
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/) ->: Pydantic compatibility | 🦜️🔗 Langchain
3,994
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesAdaptersDebuggingDeploymentEvaluationFallbacksLangSmithRun LLMs locallyModel comparisonPrivacyPydantic compatibilitySafetyMoreGuidesPydantic compatibilityOn this pagePydantic compatibilityPydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)v2 contains has a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same timeLangChain Pydantic migration plan​As of langchain>=0.0.267, LangChain will allow users to install either Pydantic V1 or V2. Internally LangChain will continue to use V1.During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial migration using pydantic v2 throughout their code, but avoiding mixing v1 and v2 code for LangChain (see below).User can either pin to pydantic v1, and upgrade their code in one go once LangChain has migrated to v2 internally, or they can start a partial migration to v2, but must avoid mixing v1 and v2 code for LangChain.Below are two examples of showing how to avoid mixing pydantic v1 and v2 code in
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/) ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesAdaptersDebuggingDeploymentEvaluationFallbacksLangSmithRun LLMs locallyModel comparisonPrivacyPydantic compatibilitySafetyMoreGuidesPydantic compatibilityOn this pagePydantic compatibilityPydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)v2 contains has a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same timeLangChain Pydantic migration plan​As of langchain>=0.0.267, LangChain will allow users to install either Pydantic V1 or V2. Internally LangChain will continue to use V1.During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial migration using pydantic v2 throughout their code, but avoiding mixing v1 and v2 code for LangChain (see below).User can either pin to pydantic v1, and upgrade their code in one go once LangChain has migrated to v2 internally, or they can start a partial migration to v2, but must avoid mixing v1 and v2 code for LangChain.Below are two examples of showing how to avoid mixing pydantic v1 and v2 code in
3,995
the case of inheritance and in the case of passing objects to LangChain.Example 1: Extending via inheritanceYES from pydantic.v1 import root_validator, validatorclass CustomTool(BaseTool): # BaseTool is v1 code x: int = Field(default=1) def _run(*args, **kwargs): return "hello" @validator('x') # v1 code @classmethod def validate_x(cls, x: int) -> int: return 1 CustomTool( name='custom_tool', description="hello", x=1,)Mixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errorsNO from pydantic import Field, field_validator # pydantic v2class CustomTool(BaseTool): # BaseTool is v1 code x: int = Field(default=1) def _run(*args, **kwargs): return "hello" @field_validator('x') # v2 code @classmethod def validate_x(cls, x: int) -> int: return 1 CustomTool( name='custom_tool', description="hello", x=1,)Example 2: Passing objects to LangChainYESfrom langchain.tools.base import Toolfrom pydantic.v1 import BaseModel, Field # <-- Uses v1 namespaceclass CalculatorInput(BaseModel): question: str = Field()Tool.from_function( # <-- tool uses v1 namespace func=lambda question: 'hello', name="Calculator", description="useful for when you need to answer questions about math", args_schema=CalculatorInput)NOfrom langchain.tools.base import Toolfrom pydantic import BaseModel, Field # <-- Uses v2 namespaceclass CalculatorInput(BaseModel): question: str = Field()Tool.from_function( # <-- tool uses v1 namespace func=lambda question: 'hello', name="Calculator", description="useful for when you need to answer questions about math", args_schema=CalculatorInput)PreviousQA with private data protectionNextSafetyLangChain Pydantic migration planCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/) ->: the case of inheritance and in the case of passing objects to LangChain.Example 1: Extending via inheritanceYES from pydantic.v1 import root_validator, validatorclass CustomTool(BaseTool): # BaseTool is v1 code x: int = Field(default=1) def _run(*args, **kwargs): return "hello" @validator('x') # v1 code @classmethod def validate_x(cls, x: int) -> int: return 1 CustomTool( name='custom_tool', description="hello", x=1,)Mixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errorsNO from pydantic import Field, field_validator # pydantic v2class CustomTool(BaseTool): # BaseTool is v1 code x: int = Field(default=1) def _run(*args, **kwargs): return "hello" @field_validator('x') # v2 code @classmethod def validate_x(cls, x: int) -> int: return 1 CustomTool( name='custom_tool', description="hello", x=1,)Example 2: Passing objects to LangChainYESfrom langchain.tools.base import Toolfrom pydantic.v1 import BaseModel, Field # <-- Uses v1 namespaceclass CalculatorInput(BaseModel): question: str = Field()Tool.from_function( # <-- tool uses v1 namespace func=lambda question: 'hello', name="Calculator", description="useful for when you need to answer questions about math", args_schema=CalculatorInput)NOfrom langchain.tools.base import Toolfrom pydantic import BaseModel, Field # <-- Uses v2 namespaceclass CalculatorInput(BaseModel): question: str = Field()Tool.from_function( # <-- tool uses v1 namespace func=lambda question: 'hello', name="Calculator", description="useful for when you need to answer questions about math", args_schema=CalculatorInput)PreviousQA with private data protectionNextSafetyLangChain Pydantic migration planCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
3,996
QA with private data protection | 🦜️🔗 Langchain
Open In Colab
Open In Colab ->: QA with private data protection | 🦜️🔗 Langchain
3,997
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesAdaptersDebuggingDeploymentEvaluationFallbacksLangSmithRun LLMs locallyModel comparisonPrivacyData anonymization with Microsoft PresidioReversible anonymizationMulti-language anonymizationQA with private data protectionPydantic compatibilitySafetyMoreGuidesPrivacyData anonymization with Microsoft PresidioQA with private data protectionOn this pageQA with private data protectionQA with private data protectionIn this notebook, we will look at building a basic system for question answering, based on private data. Before feeding the LLM with this data, we need to protect it so that it doesn't go to an external API (e.g. OpenAI, Anthropic). Then, after receiving the model output, we would like the data to be restored to its original form. Below you can observe an example flow of this QA system:In the following notebook, we will not go into the details of how the anonymizer works. If you are interested, please visit this part of the documentation.Quickstart​Iterative process of upgrading the anonymizer​# Install necessary packages# !pip install langchain langchain-experimental openai presidio-analyzer presidio-anonymizer spacy Faker faiss-cpu tiktoken# ! python -m spacy download en_core_web_lgdocument_content = """Date: October 19, 2021 Witness: John Doe Subject: Testimony Regarding the Loss of Wallet Testimony Content: Hello Officer, My name is John Doe and on October 19, 2021, my wallet was stolen in the vicinity of Kilmarnock during a bike trip. This wallet contains some very important things to me. Firstly, the wallet contains my credit card with number 4111 1111 1111 1111, which is registered under my name and linked to my
Open In Colab
Open In Colab ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsMemoryAgentsCallbacksModulesSecurityGuidesAdaptersDebuggingDeploymentEvaluationFallbacksLangSmithRun LLMs locallyModel comparisonPrivacyData anonymization with Microsoft PresidioReversible anonymizationMulti-language anonymizationQA with private data protectionPydantic compatibilitySafetyMoreGuidesPrivacyData anonymization with Microsoft PresidioQA with private data protectionOn this pageQA with private data protectionQA with private data protectionIn this notebook, we will look at building a basic system for question answering, based on private data. Before feeding the LLM with this data, we need to protect it so that it doesn't go to an external API (e.g. OpenAI, Anthropic). Then, after receiving the model output, we would like the data to be restored to its original form. Below you can observe an example flow of this QA system:In the following notebook, we will not go into the details of how the anonymizer works. If you are interested, please visit this part of the documentation.Quickstart​Iterative process of upgrading the anonymizer​# Install necessary packages# !pip install langchain langchain-experimental openai presidio-analyzer presidio-anonymizer spacy Faker faiss-cpu tiktoken# ! python -m spacy download en_core_web_lgdocument_content = """Date: October 19, 2021 Witness: John Doe Subject: Testimony Regarding the Loss of Wallet Testimony Content: Hello Officer, My name is John Doe and on October 19, 2021, my wallet was stolen in the vicinity of Kilmarnock during a bike trip. This wallet contains some very important things to me. Firstly, the wallet contains my credit card with number 4111 1111 1111 1111, which is registered under my name and linked to my
3,998
is registered under my name and linked to my bank account, PL61109010140000071219812874. Additionally, the wallet had a driver's license - DL No: 999000680 issued to my name. It also houses my Social Security Number, 602-76-4532. What's more, I had my polish identity card there, with the number ABC123456. I would like this data to be secured and protected in all possible ways. I believe It was stolen at 9:30 AM. In case any information arises regarding my wallet, please reach out to me on my phone number, 999-888-7777, or through my personal email, [email protected]. Please consider this information to be highly confidential and respect my privacy. The bank has been informed about the stolen credit card and necessary actions have been taken from their end. They will be reachable at their official email, [email protected]. My representative there is Victoria Cherry (her business phone: 987-654-3210). Thank you for your assistance, John Doe"""from langchain.schema import Documentdocuments = [Document(page_content=document_content)]We only have one document, so before we move on to creating a QA system, let's focus on its content to begin with.You may observe that the text contains many different PII values, some types occur repeatedly (names, phone numbers, emails), and some specific PIIs are repeated (John Doe).# Util function for coloring the PII markers# NOTE: It will not be visible on documentation page, only in the notebookimport redef print_colored_pii(string): colored_string = re.sub( r"(<[^>]*>)", lambda m: "\033[31m" + m.group(1) + "\033[0m", string ) print(colored_string)Let's proceed and try to anonymize the text with the default settings. For now, we don't replace the data with synthetic, we just mark it with markers (e.g. <PERSON>), so we set add_default_faker_operators=False:from langchain_experimental.data_anonymizer import PresidioReversibleAnonymizeranonymizer = PresidioReversibleAnonymizer(
Open In Colab
Open In Colab ->: is registered under my name and linked to my bank account, PL61109010140000071219812874. Additionally, the wallet had a driver's license - DL No: 999000680 issued to my name. It also houses my Social Security Number, 602-76-4532. What's more, I had my polish identity card there, with the number ABC123456. I would like this data to be secured and protected in all possible ways. I believe It was stolen at 9:30 AM. In case any information arises regarding my wallet, please reach out to me on my phone number, 999-888-7777, or through my personal email, [email protected]. Please consider this information to be highly confidential and respect my privacy. The bank has been informed about the stolen credit card and necessary actions have been taken from their end. They will be reachable at their official email, [email protected]. My representative there is Victoria Cherry (her business phone: 987-654-3210). Thank you for your assistance, John Doe"""from langchain.schema import Documentdocuments = [Document(page_content=document_content)]We only have one document, so before we move on to creating a QA system, let's focus on its content to begin with.You may observe that the text contains many different PII values, some types occur repeatedly (names, phone numbers, emails), and some specific PIIs are repeated (John Doe).# Util function for coloring the PII markers# NOTE: It will not be visible on documentation page, only in the notebookimport redef print_colored_pii(string): colored_string = re.sub( r"(<[^>]*>)", lambda m: "\033[31m" + m.group(1) + "\033[0m", string ) print(colored_string)Let's proceed and try to anonymize the text with the default settings. For now, we don't replace the data with synthetic, we just mark it with markers (e.g. <PERSON>), so we set add_default_faker_operators=False:from langchain_experimental.data_anonymizer import PresidioReversibleAnonymizeranonymizer = PresidioReversibleAnonymizer(
3,999
= PresidioReversibleAnonymizer( add_default_faker_operators=False,)print_colored_pii(anonymizer.anonymize(document_content)) Date: <DATE_TIME> Witness: <PERSON> Subject: Testimony Regarding the Loss of Wallet Testimony Content: Hello Officer, My name is <PERSON> and on <DATE_TIME>, my wallet was stolen in the vicinity of <LOCATION> during a bike trip. This wallet contains some very important things to me. Firstly, the wallet contains my credit card with number <CREDIT_CARD>, which is registered under my name and linked to my bank account, <IBAN_CODE>. Additionally, the wallet had a driver's license - DL No: <US_DRIVER_LICENSE> issued to my name. It also houses my Social Security Number, <US_SSN>. What's more, I had my polish identity card there, with the number ABC123456. I would like this data to be secured and protected in all possible ways. I believe It was stolen at <DATE_TIME_2>. In case any information arises regarding my wallet, please reach out to me on my phone number, <PHONE_NUMBER>, or through my personal email, <EMAIL_ADDRESS>. Please consider this information to be highly confidential and respect my privacy. The bank has been informed about the stolen credit card and necessary actions have been taken from their end. They will be reachable at their official email, <EMAIL_ADDRESS_2>. My representative there is <PERSON_2> (her business phone: <UK_NHS>). Thank you for your assistance, <PERSON>Let's also look at the mapping between original and anonymized values:import pprintpprint.pprint(anonymizer.deanonymizer_mapping) {'CREDIT_CARD': {'<CREDIT_CARD>': '4111 1111 1111 1111'}, 'DATE_TIME': {'<DATE_TIME>': 'October 19, 2021', '<DATE_TIME_2>': '9:30 AM'}, 'EMAIL_ADDRESS': {'<EMAIL_ADDRESS>': '[email protected]', '<EMAIL_ADDRESS_2>': '[email protected]'}, 'IBAN_CODE': {'<IBAN_CODE>': 'PL61109010140000071219812874'},
Open In Colab
Open In Colab ->: = PresidioReversibleAnonymizer( add_default_faker_operators=False,)print_colored_pii(anonymizer.anonymize(document_content)) Date: <DATE_TIME> Witness: <PERSON> Subject: Testimony Regarding the Loss of Wallet Testimony Content: Hello Officer, My name is <PERSON> and on <DATE_TIME>, my wallet was stolen in the vicinity of <LOCATION> during a bike trip. This wallet contains some very important things to me. Firstly, the wallet contains my credit card with number <CREDIT_CARD>, which is registered under my name and linked to my bank account, <IBAN_CODE>. Additionally, the wallet had a driver's license - DL No: <US_DRIVER_LICENSE> issued to my name. It also houses my Social Security Number, <US_SSN>. What's more, I had my polish identity card there, with the number ABC123456. I would like this data to be secured and protected in all possible ways. I believe It was stolen at <DATE_TIME_2>. In case any information arises regarding my wallet, please reach out to me on my phone number, <PHONE_NUMBER>, or through my personal email, <EMAIL_ADDRESS>. Please consider this information to be highly confidential and respect my privacy. The bank has been informed about the stolen credit card and necessary actions have been taken from their end. They will be reachable at their official email, <EMAIL_ADDRESS_2>. My representative there is <PERSON_2> (her business phone: <UK_NHS>). Thank you for your assistance, <PERSON>Let's also look at the mapping between original and anonymized values:import pprintpprint.pprint(anonymizer.deanonymizer_mapping) {'CREDIT_CARD': {'<CREDIT_CARD>': '4111 1111 1111 1111'}, 'DATE_TIME': {'<DATE_TIME>': 'October 19, 2021', '<DATE_TIME_2>': '9:30 AM'}, 'EMAIL_ADDRESS': {'<EMAIL_ADDRESS>': '[email protected]', '<EMAIL_ADDRESS_2>': '[email protected]'}, 'IBAN_CODE': {'<IBAN_CODE>': 'PL61109010140000071219812874'},