Dataset columns:
Unnamed: 0 (int64, values 0 to 4.66k)
page content (string, 23 to 2k characters)
description (string, 8 to 925 characters)
output (string, 38 to 2.93k characters)
1,500
Bittensor | 🦜️🔗 Langchain
Bittensor is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge.
1,501
Bittensor

Bittensor is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge.

NIBittensorLLM is developed by Neural Internet and powered by Bittensor. This LLM showcases the true potential of decentralized AI by giving you the best response(s) from the Bittensor protocol, which consists of various AI models such as OpenAI, LLaMA2, etc.

Users can view their logs, requests, and API keys on the Validator Endpoint Frontend. However, changes to the configuration are currently prohibited; otherwise, the user's queries will be blocked.

If you encounter any difficulties or have any questions, please feel free to reach out to our developer on GitHub or Discord, or join the Neural Internet Discord server for the latest updates and queries.

Different Parameter and response handling for NIBittensorLLM

from langchain.llms import NIBittensorLLM
import json
from pprint import pprint
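Before the parameter examples in the next rows, here is a minimal sketch of the plain call pattern. The no-argument constructor is assumed here, since system_prompt is described above as optional.

from langchain.llms import NIBittensorLLM

# NIBittensorLLM is used like any other LangChain LLM: pass a prompt string, get a completion back.
llm = NIBittensorLLM()
print(llm("What is decentralized AI?"))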
1,502
from langchain.llms import NIBittensorLLM
import json
from pprint import pprint
from langchain.globals import set_debug

set_debug(True)

# The system_prompt parameter of NIBittensorLLM is optional; set it to whatever behavior you want from the model.
llm_sys = NIBittensorLLM(
    system_prompt="Your task is to determine a response based on the user prompt. Explain it to me as if I were the technical lead of a project."
)
sys_resp = llm_sys(
    "What is bittensor and what are the potential benefits of decentralized AI?"
)
print(f"Response provided by LLM with system prompt set is: {sys_resp}")

# The top_responses parameter returns multiple responses; its value controls how many.
# The code below retrieves the top 10 miners' responses, returned as JSON with this structure:
"""
{
    "choices": [
        {
            "index": Bittensor's Metagraph index number,
            "uid": unique identifier of a miner,
            "responder_hotkey": hotkey of a miner,
            "message": {"role": "assistant", "content": the actual response},
            "response_ms": time in milliseconds required to fetch the response from a miner
        }
    ]
}
"""
multi_response_llm = NIBittensorLLM(top_responses=10)
multi_resp = multi_response_llm("What is Neural Network Feeding Mechanism?")
json_multi_resp = json.loads(multi_resp)
pprint(json_multi_resp)

Using NIBittensorLLM with LLMChain and PromptTemplate

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import NIBittensorLLM
from langchain.globals import set_debug

set_debug(True)

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# The system_prompt parameter of NIBittensorLLM is optional.
llm = NIBittensorLLM(system_prompt="Your task is to determine a response based on the user prompt.")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is bittensor?"
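The parsed JSON can be post-processed however you like. A small sketch, assuming json_multi_resp follows the response structure documented above (a choices list whose entries carry message.content and response_ms):

# Iterate over the miners' responses and pick the fastest one.
for choice in json_multi_resp["choices"]:
    print(f'miner {choice["uid"]} answered in {choice["response_ms"]} ms')
    print(choice["message"]["content"])

fastest = min(json_multi_resp["choices"], key=lambda c: c["response_ms"])
print("Fastest responder hotkey:", fastest["responder_hotkey"])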
1,503
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is bittensor?"
llm_chain.run(question)

Using NIBittensorLLM with Conversational Agent and Google Search Tool

from langchain.agents import (
    AgentType,
    initialize_agent,
    load_tools,
    ZeroShotAgent,
    Tool,
    AgentExecutor,
)
from langchain.memory import ConversationBufferMemory
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.utilities import GoogleSearchAPIWrapper, SerpAPIWrapper
from langchain.llms import NIBittensorLLM

memory = ConversationBufferMemory(memory_key="chat_history")

# A search tool must exist before the prompt is built; the scraped page assumes a `tools`
# list such as this one, backed by the Google Search wrapper imported above.
search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Google Search",
        func=search.run,
        description="Useful for answering questions that require looking something up on the internet.",
    )
]

prefix = """Answer the prompt with the LLM. If something needs to be searched, use the internet,
observe the search results, and give an accurate reply to the user's question. Try to use authenticated sources."""
suffix = """Begin!
{chat_history}
Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)

llm = NIBittensorLLM(system_prompt="Your task is to determine a response based on the user prompt")
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory
)
response = agent_chain.run(input=prompt)
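The last line above mirrors the source page, which passes the prompt object itself as the agent input; in practice you would typically hand the executor a user question instead, roughly as in this sketch:

# Invoke the executor with an actual question; the memory carries chat history across calls.
response = agent_chain.run(input="What is bittensor and what can it be used for?")
print(response)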
1,504
OctoAI | 🦜️🔗 Langchain
OctoML is a service with efficient compute. It enables users to integrate their choice of AI models into applications. The OctoAI compute service helps you run, tune, and scale AI applications.
1,505
OctoAI

OctoML is a service with efficient compute. It enables users to integrate their choice of AI models into applications. The OctoAI compute service helps you run, tune, and scale AI applications.

This example goes over how to use LangChain to interact with OctoAI LLM endpoints.

Setup

To run our example app, there are four simple steps to take:

1. Clone the MPT-7B demo template to your OctoAI account by visiting https://octoai.cloud/templates/mpt-7b-demo, then clicking "Clone Template." If you want to use a different LLM model, you can also containerize the model and create a custom OctoAI endpoint yourself, by following Build a Container from Python and Create a Custom Endpoint from a Container.
2. Paste your endpoint URL in the code cell below.
3. Get an API token from your OctoAI account page.
4. Paste your API token in the code cell below.

import os

os.environ["OCTOAI_API_TOKEN"] = "OCTOAI_API_TOKEN"
os.environ["ENDPOINT_URL"] = "https://mpt-7b-demo-f1kzsig6xes9.octoai.run/generate"
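If you prefer not to hardcode the token, you can read it interactively, mirroring the getpass pattern used in the Fireworks rows later in this section. A sketch; the endpoint URL is the demo endpoint from this page:

import os
import getpass

# Prompt for the token instead of leaving a placeholder string in the notebook.
if "OCTOAI_API_TOKEN" not in os.environ:
    os.environ["OCTOAI_API_TOKEN"] = getpass.getpass("OctoAI API Token:")
os.environ["ENDPOINT_URL"] = "https://mpt-7b-demo-f1kzsig6xes9.octoai.run/generate"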
1,506
= "OCTOAI_API_TOKEN"os.environ["ENDPOINT_URL"] = "https://mpt-7b-demo-f1kzsig6xes9.octoai.run/generate"from langchain.llms.octoai_endpoint import OctoAIEndpointfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainExample​template = """Below is an instruction that describes a task. Write a response that appropriately completes the request.\n Instruction:\n{question}\n Response: """prompt = PromptTemplate(template=template, input_variables=["question"])llm = OctoAIEndpoint( model_kwargs={ "max_new_tokens": 200, "temperature": 0.75, "top_p": 0.95, "repetition_penalty": 1, "seed": None, "stop": [], },)question = "Who was leonardo davinci?"llm_chain = LLMChain(prompt=prompt, llm=llm)llm_chain.run(question) '\nLeonardo da Vinci was an Italian polymath and painter regarded by many as one of the greatest painters of all time. He is best known for his masterpieces including Mona Lisa, The Last Supper, and The Virgin of the Rocks. He was a draftsman, sculptor, architect, and one of the most important figures in the history of science. Da Vinci flew gliders, experimented with water turbines and windmills, and invented the catapult and a joystick-type human-powered aircraft control. He may have pioneered helicopters. As a scholar, he was interested in anatomy, geology, botany, engineering, mathematics, and astronomy.\nOther painters and patrons claimed to be more talented, but Leonardo da Vinci was an incredibly productive artist, sculptor, engineer, anatomist, and scientist.'PreviousNLP CloudNextOllamaSetupExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,507
OpaquePrompts | 🦜️🔗 Langchain
OpaquePrompts is a service that enables applications to leverage the power of language models without compromising user privacy. Designed for composability and ease of integration into existing applications and services, OpaquePrompts is consumable via a simple Python library as well as through LangChain. Perhaps more importantly, OpaquePrompts leverages the power of confidential computing to ensure that even the OpaquePrompts service itself cannot access the data it is protecting.
1,508
OpaquePrompts

OpaquePrompts is a service that enables applications to leverage the power of language models without compromising user privacy. Designed for composability and ease of integration into existing applications and services, OpaquePrompts is consumable via a simple Python library as well as through LangChain. Perhaps more importantly, OpaquePrompts leverages the power of confidential computing to ensure that even the OpaquePrompts service itself cannot access the data it is protecting.

This notebook goes over how to use LangChain to interact with OpaquePrompts.

# install the opaqueprompts and langchain packages
pip install opaqueprompts langchain

Accessing the OpaquePrompts API requires an API key, which you can get by creating an account on the OpaquePrompts website. Once you have an account, you can find your API key on the API Keys page.

import os

# Set API keys
os.environ['OPAQUEPROMPTS_API_KEY'] = "<OPAQUEPROMPTS_API_KEY>"
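The rows that follow show the drop-in wrapper in a full chain; the essence is the one-line change sketched here, assuming the API keys set above:

from langchain.llms import OpenAI, OpaquePrompts

# Without OpaquePrompts: prompts reach OpenAI with any sensitive data intact.
plain_llm = OpenAI()

# With OpaquePrompts: the prompt is sanitized before it reaches the base LLM,
# and the response is desanitized on the way back.
private_llm = OpaquePrompts(base_llm=OpenAI())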
1,509
# Set API keys
os.environ['OPAQUEPROMPTS_API_KEY'] = "<OPAQUEPROMPTS_API_KEY>"
os.environ['OPENAI_API_KEY'] = "<OPENAI_API_KEY>"

Use OpaquePrompts LLM Wrapper

Applying OpaquePrompts to your application can be as simple as wrapping your LLM with the OpaquePrompts class, replacing llm=OpenAI() with llm=OpaquePrompts(base_llm=OpenAI()).

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferWindowMemory
from langchain.llms import OpaquePrompts
from langchain.globals import set_debug, set_verbose

set_debug(True)
set_verbose(True)

prompt_template = """As an AI assistant, you will answer questions according to given context.

Sensitive personal information in the question is masked for privacy.
For instance, if the original text says "Giana is good," it will be changed
to "PERSON_998 is good." Here's how to handle these changes:
* Consider these masked phrases just as placeholders, but still refer to
them in a relevant way when answering.
* It's possible that different masked terms might mean the same thing.
Stick with the given term and don't modify it.
* All masked terms follow the "TYPE_ID" pattern.
* Please don't invent new masked terms. For instance, if you see "PERSON_998,"
don't come up with "PERSON_997" or "PERSON_999" unless they're already in the question.

Conversation History: ```{history}```
Context : ```During our recent meeting on February 23, 2023, at 10:30 AM,
John Doe provided me with his personal details. His email is [email protected] and
his contact number is 650-456-7890. He lives in New York City, USA, and
belongs to the American nationality with Christian beliefs and a leaning towards
the Democratic party. He mentioned that he recently made a transaction using his
credit card 4111 1111 1111 1111 and transferred bitcoins to the wallet address
1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa. While discussing his European
1,510
While discussing his European travels, he noted
down his IBAN as GB29 NWBK 6016 1331 9268 19. Additionally, he provided his website
as https://johndoeportfolio.com. John also discussed some of his US-specific details.
He said his bank account number is 1234567890123456 and his drivers license is Y12345678.
His ITIN is 987-65-4321, and he recently renewed his passport, the number for which is
123456789. He emphasized not to share his SSN, which is 123-45-6789. Furthermore, he
mentioned that he accesses his work files remotely through the IP 192.168.1.1 and has
a medical license number MED-123456. ```
Question: ```{question}```"""

chain = LLMChain(
    prompt=PromptTemplate.from_template(prompt_template),
    llm=OpaquePrompts(base_llm=OpenAI()),
    memory=ConversationBufferWindowMemory(k=2),
    verbose=True,
)

print(
    chain.run(
        {"question": """Write a message to remind John to do password reset for his website to stay secure."""},
        callbacks=[StdOutCallbackHandler()],
    )
)

From the output, you can see that the following context from the user input contains sensitive data.

# Context from user input

During our recent meeting on February 23, 2023, at 10:30 AM, John Doe provided me with his personal details. His email is [email protected] and his contact number is 650-456-7890. He lives in New York City, USA, and belongs to the American nationality with Christian beliefs and a leaning towards the Democratic party. He mentioned that he recently made a transaction using his credit card 4111 1111 1111 1111 and transferred bitcoins to the wallet address 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa. While discussing his European travels, he noted down his IBAN as GB29 NWBK 6016 1331 9268 19. Additionally, he provided his website as https://johndoeportfolio.com. John also discussed some of his US-specific details. He said his bank account number is 1234567890123456 and his drivers license is Y12345678. His ITIN is 987-65-4321, and he recently renewed his passport, the number for which is
1,511
renewed his passport, the number for which is 123456789. He emphasized not to share his SSN, which is 669-45-6789. Furthermore, he mentioned that he accesses his work files remotely through the IP 192.168.1.1 and has a medical license number MED-123456.

OpaquePrompts will automatically detect the sensitive data and replace it with placeholders.

# Context after OpaquePrompts

During our recent meeting on DATE_TIME_3, at DATE_TIME_2, PERSON_3 provided me with his personal details. His email is EMAIL_ADDRESS_1 and his contact number is PHONE_NUMBER_1. He lives in LOCATION_3, LOCATION_2, and belongs to the NRP_3 nationality with NRP_2 beliefs and a leaning towards the Democratic party. He mentioned that he recently made a transaction using his credit card CREDIT_CARD_1 and transferred bitcoins to the wallet address CRYPTO_1. While discussing his NRP_1 travels, he noted down his IBAN as IBAN_CODE_1. Additionally, he provided his website as URL_1. PERSON_2 also discussed some of his LOCATION_1-specific details. He said his bank account number is US_BANK_NUMBER_1 and his drivers license is US_DRIVER_LICENSE_2. His ITIN is US_ITIN_1, and he recently renewed his passport, the number for which is DATE_TIME_1. He emphasized not to share his SSN, which is US_SSN_1. Furthermore, he mentioned that he accesses his work files remotely through the IP IP_ADDRESS_1 and has a medical license number MED-US_DRIVER_LICENSE_1.

Placeholders are also used in the LLM response.

# response returned by LLM

Hey PERSON_1, just wanted to remind you to do a password reset for your website URL_1 through your email EMAIL_ADDRESS_1. It's important to stay secure online, so don't forget to do it!

The response is desanitized by replacing the placeholders with the original sensitive data.

# desanitized LLM response from OpaquePrompts

Hey John, just wanted to remind you to do a password reset for your website https://johndoeportfolio.com through your email [email protected]. It's important to stay secure online, so don't
1,512
It's important to stay secure online, so don't forget to do it!

Use OpaquePrompts in a LangChain expression

There are also functions that can be used with a LangChain expression if the drop-in replacement doesn't offer the flexibility you need.

import langchain.utilities.opaqueprompts as op
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.output_parser import StrOutputParser

prompt = PromptTemplate.from_template(prompt_template)
llm = OpenAI()
pg_chain = (
    op.sanitize
    | RunnablePassthrough.assign(
        response=(lambda x: x["sanitized_input"]) | prompt | llm | StrOutputParser(),
    )
    | (lambda x: op.desanitize(x["response"], x["secure_context"]))
)

pg_chain.invoke(
    {
        "question": "Write a text message to remind John to do password reset for his website through his email to stay secure.",
        "history": "",
    }
)
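You can also look at what the sanitizer produces on its own before wiring it into a chain. This is a sketch under the assumption that op.sanitize accepts the same question/history dict the chain above is invoked with, and returns the sanitized_input and secure_context fields that the chain reads:

import langchain.utilities.opaqueprompts as op

# Feed the same shape of input that pg_chain.invoke receives.
sanitized = op.sanitize(
    {"question": "Remind John Doe at [email protected] to reset his password.", "history": ""}
)
print(sanitized["sanitized_input"])  # assumed: masked text with placeholders like PERSON_1, EMAIL_ADDRESS_1
print(sanitized["secure_context"])   # assumed: opaque context later needed by op.desanitize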
1,513
Fireworks | 🦜️🔗 Langchain
Fireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform.
1,514
Fireworks

Fireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform. This example goes over how to use LangChain to interact with Fireworks models.

from langchain.llms.fireworks import Fireworks
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)
import os

Setup

1. Make sure the fireworks-ai package is installed in your environment.
2. Sign in to Fireworks AI for an API key to access our models, and make sure it is set as the FIREWORKS_API_KEY environment variable.
3. Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on app.fireworks.ai.

import os
import getpass

if "FIREWORKS_API_KEY" not in os.environ:
    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")
1,515
if "FIREWORKS_API_KEY" not in os.environ:
    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")

# Initialize a Fireworks model
llm = Fireworks(model="accounts/fireworks/models/llama-v2-13b")

Calling the Model Directly

You can call the model directly with string prompts to get completions.

# Single prompt
output = llm("Who's the best quarterback in the NFL?")
print(output)

Is it Tom Brady? Peyton Manning? Aaron Rodgers? Or maybe even Andrew Luck? Well, let's look at some stats to decide. First, let's talk about touchdowns. Who's thrown the most touchdowns this season? (pause for dramatic effect) It's... Aaron Rodgers! With 28 touchdowns, he's leading the league in that category. But what about interceptions? Who's thrown the fewest picks? (drumroll) It's... Tom Brady! With only 4 interceptions, he's got the fewest picks in the league. Now, let's talk about passer rating. Who's got the highest passer rating this season? (pause for suspense) It's... Peyton Manning! With a rating of 114.2, he's been lights out this season. But what about wins? Who's got the most wins this season? (drumroll) It's... Andrew Luck! With 8 wins, he's got the most victories this season. So, there you have it folks. According to these stats, the best quarterback in the NFL this season is... (drumroll) Aaron Rodgers! But wait, there's more! Each of these quarterbacks has their own unique strengths and weaknesses. Tom Brady is a master of the short pass, but can struggle with deep balls. Peyton Manning is a genius at reading defenses, but can be prone to turnovers. Aaron Rodgers has a cannon for an arm, but can be inconsistent at times. Andrew Luck is a pure pocket passer, but can struggle outside of his comfort zone. So, who's the best quarterback in the NFL? It's a tough call, but one thing's for sure: each of these quarterbacks is an elite talent, and they'll continue to light up
1,516
an elite talent, and they'll continue to light up the scoreboard for their respective teams all season long.

# Calling multiple prompts
output = llm.generate([
    "Who's the best cricket player in 2016?",
    "Who's the best basketball player in the league?",
])
print(output.generations)

[[Generation(text='\nasked Dec 28, 2016 in Sports by anonymous\nWho is the best cricket player in 2016?\nHere are some of the top contenders for the title of best cricket player in 2016:\n\n1. Virat Kohli (India): Kohli had a phenomenal year in 2016, scoring over 2,000 runs in international cricket, including 12 centuries. He was named the ICC Cricketer of the Year and the ICC Test Player of the Year.\n2. Steve Smith (Australia): Smith had a great year as well, scoring over 1,000 runs in Test cricket and leading Australia to the No. 1 ranking in Test cricket. He was named the ICC ODI Player of the Year.\n3. Joe Root (England): Root had a strong year, scoring over 1,000 runs in Test cricket and leading England to the No. 2 ranking in Test cricket.\n4. Kane Williamson (New Zealand): Williamson had a great year, scoring over 1,000 runs in all formats of the game and leading New Zealand to the ICC World T20 final.\n5. Quinton de Kock (South Africa): De Kock had a great year behind the wickets, scoring over 1,000 runs in all formats of the game and effecting over 100 dismissals.\n6. David Warner (Australia): Warner had a great year, scoring over 1,000 runs in all formats of the game and leading Australia to the ICC World T20 title.\n7. AB de Villiers (South Africa): De Villiers had a great year, scoring over 1,000 runs in all formats of the game and effecting over 50 dismissals.\n8. Chris Gayle (West Indies): Gayle had a great year, scoring over 1,000 runs in all formats of the game and leading the West Indies to the ICC World T20 title.\n9. Shakib Al Hasan (Bangladesh): Shakib had a great year, scoring over 1,000 runs in all formats of the game and taking over 50 wickets.\n10',
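The LLMResult shown (truncated) above can also be unpacked programmatically rather than printed raw. A small sketch, assuming the output object from the generate call above:

# output.generations is a list with one entry per input prompt;
# each entry is a list of Generation objects whose .text holds the completion.
for prompt_idx, generations in enumerate(output.generations):
    print(f"Prompt {prompt_idx}: {generations[0].text[:120]}...")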
1,517
of the game and taking over 50 wickets.\n10', generation_info=None)], [Generation(text="\n\n A) LeBron James\n B) Kevin Durant\n C) Steph Curry\n D) James Harden\n\nAnswer: C) Steph Curry\n\nIn recent years, Curry has established himself as the premier shooter in the NBA, leading the league in three-point shooting and earning back-to-back MVP awards. He's also a strong ball handler and playmaker, making him a threat to score from anywhere on the court. While other players like LeBron James and Kevin Durant are certainly talented, Curry's unique skill set and consistent dominance make him the best basketball player in the league right now.", generation_info=None)]]# Setting additional parameters: temperature, max_tokens, top_pllm = Fireworks(model="accounts/fireworks/models/llama-v2-13b-chat", model_kwargs={"temperature":0.7, "max_tokens":15, "top_p":1.0})print(llm("What's the weather like in Kansas City in December?")) What's the weather like in Kansas City in December? Simple Chain with Non-Chat ModelYou can use the LangChain Expression Language to create a simple chain with non-chat models.from langchain.prompts import PromptTemplatefrom langchain.llms.fireworks import Fireworksllm = Fireworks(model="accounts/fireworks/models/llama-v2-13b", model_kwargs={"temperature":0, "max_tokens":100, "top_p":1.0})prompt = PromptTemplate.from_template("Tell me a joke about {topic}?")chain = prompt | llmprint(chain.invoke({"topic": "bears"})) A bear walks into a bar and says, "I'll have a beer and a muffin." The bartender says, "Sorry, we don't serve muffins here." The bear says, "OK, give me a beer and I'll make my own muffin." What do you call a bear with no teeth? A gummy bear. What do you call a bear with no teeth and no hair? You can stream the output, if you want.for token in chain.stream({"topic": "bears"}): print(token, end='', flush=True) A bear walks into a bar and says, "I'll have a beer and a muffin." The bartender says,
Fireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform.
Fireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform. ->: of the game and taking over 50 wickets.\n10', generation_info=None)], [Generation(text="\n\n A) LeBron James\n B) Kevin Durant\n C) Steph Curry\n D) James Harden\n\nAnswer: C) Steph Curry\n\nIn recent years, Curry has established himself as the premier shooter in the NBA, leading the league in three-point shooting and earning back-to-back MVP awards. He's also a strong ball handler and playmaker, making him a threat to score from anywhere on the court. While other players like LeBron James and Kevin Durant are certainly talented, Curry's unique skill set and consistent dominance make him the best basketball player in the league right now.", generation_info=None)]]# Setting additional parameters: temperature, max_tokens, top_pllm = Fireworks(model="accounts/fireworks/models/llama-v2-13b-chat", model_kwargs={"temperature":0.7, "max_tokens":15, "top_p":1.0})print(llm("What's the weather like in Kansas City in December?")) What's the weather like in Kansas City in December? Simple Chain with Non-Chat ModelYou can use the LangChain Expression Language to create a simple chain with non-chat models.from langchain.prompts import PromptTemplatefrom langchain.llms.fireworks import Fireworksllm = Fireworks(model="accounts/fireworks/models/llama-v2-13b", model_kwargs={"temperature":0, "max_tokens":100, "top_p":1.0})prompt = PromptTemplate.from_template("Tell me a joke about {topic}?")chain = prompt | llmprint(chain.invoke({"topic": "bears"})) A bear walks into a bar and says, "I'll have a beer and a muffin." The bartender says, "Sorry, we don't serve muffins here." The bear says, "OK, give me a beer and I'll make my own muffin." What do you call a bear with no teeth? A gummy bear. What do you call a bear with no teeth and no hair? You can stream the output, if you want.for token in chain.stream({"topic": "bears"}): print(token, end='', flush=True) A bear walks into a bar and says, "I'll have a beer and a muffin." The bartender says,
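The generate call shown above returns an LLMResult whose generations attribute holds one list of Generation objects per input prompt. A minimal sketch of pulling the plain text back out of that structure, assuming the output variable from the example above (the slicing is only for readability):

# output.generations[i] is the list of candidate completions for the i-th prompt;
# each candidate is a Generation object whose .text attribute holds the completion string.
for prompt_index, candidates in enumerate(output.generations):
    print(f"Prompt {prompt_index}: {candidates[0].text[:100]}")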
1,518
have a beer and a muffin." The bartender says, "Sorry, we don't serve muffins here." The bear says, "OK, give me a beer and I'll make my own muffin." What do you call a bear with no teeth? A gummy bear. What do you call a bear with no teeth and no hair?PreviousEden AINextForefrontAICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Fireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform.
Fireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform. ->: have a beer and a muffin." The bartender says, "Sorry, we don't serve muffins here." The bear says, "OK, give me a beer and I'll make my own muffin." What do you call a bear with no teeth? A gummy bear. What do you call a bear with no teeth and no hair?PreviousEden AINextForefrontAICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
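Because prompt | llm builds a standard LCEL runnable, the chain from the section above can also be run over several inputs in one call with batch. A small sketch with illustrative topics:

# Run the chain over multiple inputs at once; results come back in the same order.
results = chain.batch([{"topic": "bears"}, {"topic": "penguins"}])
for joke in results:
    print(joke)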
1,519
Nebula (Symbl.ai) | 🦜️🔗 Langchain
Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.
Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation. ->: Nebula (Symbl.ai) | 🦜️🔗 Langchain
1,520
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsNebula (Symbl.ai)Nebula (Symbl.ai)Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.Nebula documentation: https://docs.symbl.ai/docs/nebula-llmThis example goes over how to use LangChain to interact with the Nebula platform.Make sure you have API Key with you. If you don't have one please request one.from langchain.llms.symblai_nebula import Nebulallm = Nebula(nebula_api_key='<your_api_key>')Use a conversation transcript and instruction to construct a prompt.from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainconversation = """Sam: Good morning, team! Let's keep this standup concise. We'll go in the usual order: what you did yesterday, what you plan to do today, and any blockers. Alex, kick us off.Alex: Morning! Yesterday, I
Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.
Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsNebula (Symbl.ai)Nebula (Symbl.ai)Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.Nebula documentation: https://docs.symbl.ai/docs/nebula-llmThis example goes over how to use LangChain to interact with the Nebula platform.Make sure you have API Key with you. If you don't have one please request one.from langchain.llms.symblai_nebula import Nebulallm = Nebula(nebula_api_key='<your_api_key>')Use a conversation transcript and instruction to construct a prompt.from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainconversation = """Sam: Good morning, team! Let's keep this standup concise. We'll go in the usual order: what you did yesterday, what you plan to do today, and any blockers. Alex, kick us off.Alex: Morning! Yesterday, I
1,521
Alex, kick us off.Alex: Morning! Yesterday, I wrapped up the UI for the user dashboard. The new charts and widgets are now responsive. I also had a sync with the design team to ensure the final touchups are in line with the brand guidelines. Today, I'll start integrating the frontend with the new API endpoints Rhea was working on. The only blocker is waiting for some final API documentation, but I guess Rhea can update on that.Rhea: Hey, all! Yep, about the API documentation - I completed the majority of the backend work for user data retrieval yesterday. The endpoints are mostly set up, but I need to do a bit more testing today. I'll finalize the API documentation by noon, so that should unblock Alex. After that, I’ll be working on optimizing the database queries for faster data fetching. No other blockers on my end.Sam: Great, thanks Rhea. Do reach out if you need any testing assistance or if there are any hitches with the database. Now, my update: Yesterday, I coordinated with the client to get clarity on some feature requirements. Today, I'll be updating our project roadmap and timelines based on their feedback. Additionally, I'll be sitting with the QA team in the afternoon for preliminary testing. Blocker: I might need both of you to be available for a quick call in case the client wants to discuss the changes live.Alex: Sounds good, Sam. Just let us know a little in advance for the call.Rhea: Agreed. We can make time for that.Sam: Perfect! Let's keep the momentum going. Reach out if there are any sudden issues or support needed. Have a productive day!Alex: You too.Rhea: Thanks, bye!"""instruction = "Identify the main objectives mentioned in this conversation."prompt = PromptTemplate.from_template("{instruction}\n{conversation}")llm_chain = LLMChain(prompt=prompt, llm=llm)llm_chain.run(instruction=instruction, conversation=conversation)PreviousStochasticAINextTextGenCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.
Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation. ->: Alex, kick us off.Alex: Morning! Yesterday, I wrapped up the UI for the user dashboard. The new charts and widgets are now responsive. I also had a sync with the design team to ensure the final touchups are in line with the brand guidelines. Today, I'll start integrating the frontend with the new API endpoints Rhea was working on. The only blocker is waiting for some final API documentation, but I guess Rhea can update on that.Rhea: Hey, all! Yep, about the API documentation - I completed the majority of the backend work for user data retrieval yesterday. The endpoints are mostly set up, but I need to do a bit more testing today. I'll finalize the API documentation by noon, so that should unblock Alex. After that, I’ll be working on optimizing the database queries for faster data fetching. No other blockers on my end.Sam: Great, thanks Rhea. Do reach out if you need any testing assistance or if there are any hitches with the database. Now, my update: Yesterday, I coordinated with the client to get clarity on some feature requirements. Today, I'll be updating our project roadmap and timelines based on their feedback. Additionally, I'll be sitting with the QA team in the afternoon for preliminary testing. Blocker: I might need both of you to be available for a quick call in case the client wants to discuss the changes live.Alex: Sounds good, Sam. Just let us know a little in advance for the call.Rhea: Agreed. We can make time for that.Sam: Perfect! Let's keep the momentum going. Reach out if there are any sudden issues or support needed. Have a productive day!Alex: You too.Rhea: Thanks, bye!"""instruction = "Identify the main objectives mentioned in this conversation."prompt = PromptTemplate.from_template("{instruction}\n{conversation}")llm_chain = LLMChain(prompt=prompt, llm=llm)llm_chain.run(instruction=instruction, conversation=conversation)PreviousStochasticAINextTextGenCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
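As on the other integration pages, the prompt and Nebula LLM defined above can also be composed with the LCEL pipe operator instead of LLMChain. A minimal sketch, assuming the Nebula wrapper supports the standard runnable interface like the other LLM integrations shown in these docs:

# Compose the existing prompt template and Nebula LLM without an LLMChain.
chain = prompt | llm
print(chain.invoke({"instruction": instruction, "conversation": conversation}))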
1,522
Clarifai | 🦜️🔗 Langchain
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference. ->: Clarifai | 🦜️🔗 Langchain
1,523
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsClarifaiClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.This example goes over how to use LangChain to interact with Clarifai models. To use Clarifai, you must have an account and a Personal Access Token (PAT) key.
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsClarifaiClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.This example goes over how to use LangChain to interact with Clarifai models. To use Clarifai, you must have an account and a Personal Access Token (PAT) key.
1,524
Check here to get or create a PAT.Dependencies# Install required dependenciespip install clarifaiImportsHere we will be setting the personal access token. You can find your PAT under settings/security in your Clarifai account.# Please login and get your API key from https://clarifai.com/settings/securityfrom getpass import getpassCLARIFAI_PAT = getpass() ········# Import the required modulesfrom langchain.llms import Clarifaifrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainInputCreate a prompt template to be used with the LLM Chain:template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])SetupSetup the user id and app id where the model resides. You can find a list of public models on https://clarifai.com/explore/modelsYou will have to also initialize the model id and if needed, the model version id. Some models have many versions, you can choose the one appropriate for your task.USER_ID = "openai"APP_ID = "chat-completion"MODEL_ID = "GPT-3_5-turbo"# You can provide a specific model version as the model_version_id arg.# MODEL_VERSION_ID = "MODEL_VERSION_ID"# Initialize a Clarifai LLMclarifai_llm = Clarifai( pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)# Create LLM chainllm_chain = LLMChain(prompt=prompt, llm=clarifai_llm)Run Chainquestion = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) 'Justin Bieber was born on March 1, 1994. So, we need to figure out the Super Bowl winner for the 1994 season. The NFL season spans two calendar years, so the Super Bowl for the 1994 season would have taken place in early 1995. \n\nThe Super Bowl in question is Super Bowl XXIX, which was played on January 29, 1995. The game was won by the San Francisco 49ers, who defeated the San Diego Chargers by a score of 49-26. Therefore, the San Francisco 49ers won the Super Bowl in the year
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference. ->: Check here to get or create a PAT.Dependencies# Install required dependenciespip install clarifaiImportsHere we will be setting the personal access token. You can find your PAT under settings/security in your Clarifai account.# Please login and get your API key from https://clarifai.com/settings/securityfrom getpass import getpassCLARIFAI_PAT = getpass() ········# Import the required modulesfrom langchain.llms import Clarifaifrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainInputCreate a prompt template to be used with the LLM Chain:template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])SetupSetup the user id and app id where the model resides. You can find a list of public models on https://clarifai.com/explore/modelsYou will have to also initialize the model id and if needed, the model version id. Some models have many versions, you can choose the one appropriate for your task.USER_ID = "openai"APP_ID = "chat-completion"MODEL_ID = "GPT-3_5-turbo"# You can provide a specific model version as the model_version_id arg.# MODEL_VERSION_ID = "MODEL_VERSION_ID"# Initialize a Clarifai LLMclarifai_llm = Clarifai( pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)# Create LLM chainllm_chain = LLMChain(prompt=prompt, llm=clarifai_llm)Run Chainquestion = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) 'Justin Bieber was born on March 1, 1994. So, we need to figure out the Super Bowl winner for the 1994 season. The NFL season spans two calendar years, so the Super Bowl for the 1994 season would have taken place in early 1995. \n\nThe Super Bowl in question is Super Bowl XXIX, which was played on January 29, 1995. The game was won by the San Francisco 49ers, who defeated the San Diego Chargers by a score of 49-26. Therefore, the San Francisco 49ers won the Super Bowl in the year
1,525
Francisco 49ers won the Super Bowl in the year Justin Bieber was born.'PreviousChatGLMNextCohereCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.
Clarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference. ->: Francisco 49ers won the Super Bowl in the year Justin Bieber was born.'PreviousChatGLMNextCohereCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
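The wrapper created above can also be called directly with a plain string, and pointed at any other public model by swapping the user, app, and model ids. The ids in the second snippet are hypothetical placeholders; look up real values on https://clarifai.com/explore/models before running it:

# Call the wrapped model directly, without an LLMChain.
print(clarifai_llm("In one sentence, what is a Personal Access Token?"))

# Hypothetical ids for a different public model; replace them with values from the model explorer.
other_llm = Clarifai(
    pat=CLARIFAI_PAT,
    user_id="some-user",
    app_id="some-app",
    model_id="some-model-id",
)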
1,526
Cohere | 🦜️🔗 Langchain
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: Cohere | 🦜️🔗 Langchain
1,527
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsCohereCohereCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.This example goes over how to use LangChain to interact with Cohere models.# Install the packagepip install cohere# get a new token: https://dashboard.cohere.ai/from getpass import getpassCOHERE_API_KEY = getpass() ········from langchain.llms import Coherefrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = Cohere(cohere_api_key=COHERE_API_KEY)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) " Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsCohereCohereCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.This example goes over how to use LangChain to interact with Cohere models.# Install the packagepip install cohere# get a new token: https://dashboard.cohere.ai/from getpass import getpassCOHERE_API_KEY = getpass() ········from langchain.llms import Coherefrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = Cohere(cohere_api_key=COHERE_API_KEY)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) " Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back
1,528
know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"PreviousClarifaiNextC TransformersCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions. ->: know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"PreviousClarifaiNextC TransformersCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
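Generation behaviour can usually be tuned on the wrapper itself. The parameter names below (model, temperature, max_tokens) are assumptions about the langchain.llms.Cohere wrapper rather than settings taken from this page, so check them against the API reference before relying on them:

# Illustrative settings: a named Cohere model, lower temperature, and a token cap.
llm = Cohere(
    cohere_api_key=COHERE_API_KEY,
    model="command",      # assumed model name; pick one available on your account
    temperature=0.3,
    max_tokens=256,
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("How many seconds are there in a week?"))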
1,529
TextGen | 🦜️🔗 Langchain
GitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
GitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA. ->: TextGen | 🦜️🔗 Langchain
1,530
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsTextGenOn this pageTextGenGitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.This example goes over how to use LangChain to interact with LLM models via the text-generation-webui API integration.Please ensure that you have text-generation-webui configured and an LLM installed. Recommended installation via the one-click installer appropriate for your OS.Once text-generation-webui is installed and confirmed working via the web interface, please enable the api option either through the web model configuration tab, or by adding the run-time arg --api to your start command.Set model_url and run the example​model_url = "http://localhost:5000"from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.llms import TextGenfrom langchain.globals import set_debugset_debug(True)template = """Question:
GitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
GitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsTextGenOn this pageTextGenGitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.This example goes over how to use LangChain to interact with LLM models via the text-generation-webui API integration.Please ensure that you have text-generation-webui configured and an LLM installed. Recommended installation via the one-click installer appropriate for your OS.Once text-generation-webui is installed and confirmed working via the web interface, please enable the api option either through the web model configuration tab, or by adding the run-time arg --api to your start command.Set model_url and run the example​model_url = "http://localhost:5000"from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.llms import TextGenfrom langchain.globals import set_debugset_debug(True)template = """Question:
1,531
set_debugset_debug(True)template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = TextGen(model_url=model_url)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question)Streaming Version You should install websocket-client to use this feature.
GitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
GitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA. ->: set_debugset_debug(True)template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = TextGen(model_url=model_url)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question)Streaming Version You should install websocket-client to use this feature.
1,532
pip install websocket-clientmodel_url = "ws://localhost:5005"from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.llms import TextGenfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain.globals import set_debugset_debug(True)template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = TextGen(model_url=model_url, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question)llm = TextGen( model_url = model_url, streaming=True)for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'", stop=["'","\n"]): print(chunk, end='', flush=True)PreviousNebula (Symbl.ai)NextTitan TakeoffSet model_url and run the exampleStreaming VersionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
GitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
GitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA. ->: pip install websocket-clientmodel_url = "ws://localhost:5005"from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.llms import TextGenfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain.globals import set_debugset_debug(True)template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = TextGen(model_url=model_url, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question)llm = TextGen( model_url = model_url, streaming=True)for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'", stop=["'","\n"]): print(chunk, end='', flush=True)PreviousNebula (Symbl.ai)NextTitan TakeoffSet model_url and run the exampleStreaming VersionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
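Instead of printing tokens with StreamingStdOutCallbackHandler, a custom callback handler can collect them as they arrive. This is a small sketch built on the standard on_llm_new_token hook of BaseCallbackHandler, reusing the websocket model_url from the streaming example above:

from langchain.callbacks.base import BaseCallbackHandler
from langchain.llms import TextGen

class CollectTokensHandler(BaseCallbackHandler):
    """Collect streamed tokens into a list instead of writing them to stdout."""

    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.tokens.append(token)

handler = CollectTokensHandler()
llm = TextGen(model_url=model_url, streaming=True, callbacks=[handler])
llm("Ask 'Hi, how are you?' like a pirate:'", stop=["'", "\n"])
print("".join(handler.tokens))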
1,533
SageMakerEndpoint | 🦜️🔗 Langchain
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows. ->: SageMakerEndpoint | 🦜️🔗 Langchain
1,534
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsSageMakerEndpointOn this pageSageMakerEndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.This notebooks goes over how to use an LLM hosted on a SageMaker endpoint.pip3 install langchain boto3Set up​You have to set up following required parameters of the SagemakerEndpoint call:endpoint_name: The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.credentials_profile_name: The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used.
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsSageMakerEndpointOn this pageSageMakerEndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.This notebooks goes over how to use an LLM hosted on a SageMaker endpoint.pip3 install langchain boto3Set up​You have to set up following required parameters of the SagemakerEndpoint call:endpoint_name: The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.credentials_profile_name: The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used.
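Before wiring the endpoint into LangChain, it can help to confirm that the chosen profile actually resolves to credentials and that the endpoint is visible in the target region. This optional sketch uses plain boto3; the profile, region, and endpoint names are placeholders to replace with your own:

import boto3

# Use the same profile and region you will pass to SagemakerEndpoint.
session = boto3.Session(profile_name="credentials-profile-name", region_name="us-west-2")

# Confirms the credentials resolve to an AWS account.
print(session.client("sts").get_caller_identity()["Account"])

# Lists the SageMaker endpoints in this region; your endpoint name should appear here.
print([ep["EndpointName"] for ep in session.client("sagemaker").list_endpoints()["Endpoints"]])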
1,535
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.htmlExample from langchain.docstore.document import Documentexample_doc_1 = """Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.Therefore, Peter stayed with her at the hospital for 3 days without leaving."""docs = [ Document( page_content=example_doc_1, )]from typing import Dictfrom langchain.prompts import PromptTemplatefrom langchain.llms import SagemakerEndpointfrom langchain.llms.sagemaker_endpoint import LLMContentHandlerfrom langchain.chains.question_answering import load_qa_chainimport jsonquery = """How long was Elizabeth hospitalized?"""prompt_template = """Use the following pieces of context to answer the question at the end.{context}Question: {question}Answer:"""PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"])class ContentHandler(LLMContentHandler): content_type = "application/json" accepts = "application/json" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes: input_str = json.dumps({"prompt": prompt, **model_kwargs}) return input_str.encode("utf-8") def transform_output(self, output: bytes) -> str: response_json = json.loads(output.read().decode("utf-8")) return response_json[0]["generated_text"]content_handler = ContentHandler()chain = load_qa_chain( llm=SagemakerEndpoint( endpoint_name="endpoint-name", credentials_profile_name="credentials-profile-name", region_name="us-west-2", model_kwargs={"temperature": 1e-10}, content_handler=content_handler, ), prompt=PROMPT,)chain({"input_documents": docs, "question": query}, return_only_outputs=True)PreviousRunhouseNextStochasticAISet
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows. ->: See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.htmlExample‚Äãfrom langchain.docstore.document import Documentexample_doc_1 = """Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.Therefore, Peter stayed with her at the hospital for 3 days without leaving."""docs = [ Document( page_content=example_doc_1, )]from typing import Dictfrom langchain.prompts import PromptTemplatefrom langchain.llms import SagemakerEndpointfrom langchain.llms.sagemaker_endpoint import LLMContentHandlerfrom langchain.chains.question_answering import load_qa_chainimport jsonquery = """How long was Elizabeth hospitalized?"""prompt_template = """Use the following pieces of context to answer the question at the end.{context}Question: {question}Answer:"""PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"])class ContentHandler(LLMContentHandler): content_type = "application/json" accepts = "application/json" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes: input_str = json.dumps({prompt: prompt, **model_kwargs}) return input_str.encode("utf-8") def transform_output(self, output: bytes) -> str: response_json = json.loads(output.read().decode("utf-8")) return response_json[0]["generated_text"]content_handler = ContentHandler()chain = load_qa_chain( llm=SagemakerEndpoint( endpoint_name="endpoint-name", credentials_profile_name="credentials-profile-name", region_name="us-west-2", model_kwargs={"temperature": 1e-10}, content_handler=content_handler, ), prompt=PROMPT,)chain({"input_documents": docs, "question": query}, return_only_outputs=True)PreviousRunhouseNextStochasticAISet
1,536
upExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows. ->: upExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
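The JSON payload produced by transform_input has to match whatever schema the deployed container expects, and the "prompt" key used in the ContentHandler above is only one possibility. As a hedged variant, many Hugging Face text-generation containers on SageMaker accept an "inputs"/"parameters" layout; the sketch below reuses the LLMContentHandler, Dict, and json imports from the example above, and should be verified against your own endpoint before use:

class HuggingFaceContentHandler(LLMContentHandler):
    """Assumed payload layout for Hugging Face text-generation containers; verify for your endpoint."""

    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # "inputs"/"parameters" is the layout commonly used by HF inference containers.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]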
1,537
LLM Caching integrations | 🦜️🔗 Langchain
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: LLM Caching integrations | 🦜️🔗 Langchain
1,538
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsLLM Caching integrationsOn this pageLLM Caching integrationsThis notebook covers how to cache results of individual LLM calls using different caches.from langchain.globals import set_llm_cachefrom langchain.llms import OpenAI# To make the caching really obvious, lets use a slower model.llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)In Memory Cache​from langchain.cache import InMemoryCacheset_llm_cache(InMemoryCache())# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 52.2 ms, sys: 15.2 ms, total: 67.4 ms Wall time: 1.19 s "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 191 µs, sys: 11 µs, total: 202 µs Wall time: 205 µs "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"SQLite Cache​rm .langchain.db# We
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsLLM Caching integrationsOn this pageLLM Caching integrationsThis notebook covers how to cache results of individual LLM calls using different caches.from langchain.globals import set_llm_cachefrom langchain.llms import OpenAI# To make the caching really obvious, lets use a slower model.llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)In Memory Cache​from langchain.cache import InMemoryCacheset_llm_cache(InMemoryCache())# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 52.2 ms, sys: 15.2 ms, total: 67.4 ms Wall time: 1.19 s "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 191 µs, sys: 11 µs, total: 202 µs Wall time: 205 µs "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"SQLite Cache​rm .langchain.db# We
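Once a global cache is configured with set_llm_cache, individual models can opt out of it. The cache=False constructor argument below is an assumption about the base LLM interface rather than something shown on this page, so confirm it against the caching how-to guide before relying on it:

# Assumed per-model opt-out: this LLM ignores the global cache and always calls the API.
uncached_llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)
uncached_llm("Tell me a joke")  # not served from the cache, even on repeated calls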
1,539
tired!"SQLite Cache‚Äãrm .langchain.db# We can do the same thing with a SQLite cachefrom langchain.cache import SQLiteCacheset_llm_cache(SQLiteCache(database_path=".langchain.db"))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 33.2 ms, sys: 18.1 ms, total: 51.2 ms Wall time: 667 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 4.86 ms, sys: 1.97 ms, total: 6.83 ms Wall time: 5.79 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'Upstash Redis Cache‚ÄãStandard Cache‚ÄãUse Upstash Redis to cache prompts and responses with a serverless HTTP API.from upstash_redis import Redisfrom langchain.cache import UpstashRedisCacheURL = "<UPSTASH_REDIS_REST_URL>"TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 7.56 ms, sys: 2.98 ms, total: 10.5 ms Wall time: 1.14 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 2.78 ms, sys: 1.95 ms, total: 4.73 ms Wall time: 82.9 ms '\n\nTwo guys stole a calendar. They got six months each.'Redis Cache‚ÄãStandard Cache‚ÄãUse Redis to cache prompts and responses.# We can do the same thing with a Redis cache# (make sure your local Redis instance is running first before running this example)from redis import Redisfrom langchain.cache import RedisCacheset_llm_cache(RedisCache(redis_=Redis()))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 6.88 ms, sys: 8.75 ms, total: 15.6 ms Wall time: 1.04 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: tired!"SQLite Cache‚Äãrm .langchain.db# We can do the same thing with a SQLite cachefrom langchain.cache import SQLiteCacheset_llm_cache(SQLiteCache(database_path=".langchain.db"))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 33.2 ms, sys: 18.1 ms, total: 51.2 ms Wall time: 667 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 4.86 ms, sys: 1.97 ms, total: 6.83 ms Wall time: 5.79 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'Upstash Redis Cache‚ÄãStandard Cache‚ÄãUse Upstash Redis to cache prompts and responses with a serverless HTTP API.from upstash_redis import Redisfrom langchain.cache import UpstashRedisCacheURL = "<UPSTASH_REDIS_REST_URL>"TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 7.56 ms, sys: 2.98 ms, total: 10.5 ms Wall time: 1.14 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 2.78 ms, sys: 1.95 ms, total: 4.73 ms Wall time: 82.9 ms '\n\nTwo guys stole a calendar. They got six months each.'Redis Cache‚ÄãStandard Cache‚ÄãUse Redis to cache prompts and responses.# We can do the same thing with a Redis cache# (make sure your local Redis instance is running first before running this example)from redis import Redisfrom langchain.cache import RedisCacheset_llm_cache(RedisCache(redis_=Redis()))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 6.88 ms, sys: 8.75 ms, total: 15.6 ms Wall time: 1.04 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it
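The Redis example above connects to a local instance with Redis(); for a managed or remote deployment, the redis-py client can be built from a connection URL instead. A short sketch with a placeholder URL:

from redis import Redis
from langchain.cache import RedisCache

# Placeholder connection string; add password/TLS details as required by your deployment.
remote_redis = Redis.from_url("redis://localhost:6379/0")
set_llm_cache(RedisCache(redis_=remote_redis))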
1,540
get to the other side!'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 1.59 ms, sys: 610 µs, total: 2.2 ms Wall time: 5.58 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'Semantic Cache​Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.cache import RedisSemanticCacheset_llm_cache( RedisSemanticCache( redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings() ))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 351 ms, sys: 156 ms, total: 507 ms Wall time: 3.37 s "\n\nWhy don't scientists trust atoms?\nBecause they make up everything."# The second time, while not a direct hit, the question is semantically similar to the original question,# so it uses the cached result!llm("Tell me one joke") CPU times: user 6.25 ms, sys: 2.72 ms, total: 8.97 ms Wall time: 262 ms "\n\nWhy don't scientists trust atoms?\nBecause they make up everything."GPTCache​We can use GPTCache for exact match caching OR to cache results based on semantic similarityLet's first start with an example of exact matchfrom gptcache import Cachefrom gptcache.manager.factory import manager_factoryfrom gptcache.processor.pre import get_promptfrom langchain.cache import GPTCacheimport hashlibdef get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest()def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) cache_obj.init( pre_embedding_func=get_prompt, data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"), )set_llm_cache(GPTCache(init_gptcache))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 21.5 ms, sys: 21.3 ms, total: 42.8 ms Wall time: 6.2 s '\n\nWhy did the chicken cross the road?\n\nTo get
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: get to the other side!'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 1.59 ms, sys: 610 µs, total: 2.2 ms Wall time: 5.58 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'Semantic Cache​Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.cache import RedisSemanticCacheset_llm_cache( RedisSemanticCache( redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings() ))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 351 ms, sys: 156 ms, total: 507 ms Wall time: 3.37 s "\n\nWhy don't scientists trust atoms?\nBecause they make up everything."# The second time, while not a direct hit, the question is semantically similar to the original question,# so it uses the cached result!llm("Tell me one joke") CPU times: user 6.25 ms, sys: 2.72 ms, total: 8.97 ms Wall time: 262 ms "\n\nWhy don't scientists trust atoms?\nBecause they make up everything."GPTCache​We can use GPTCache for exact match caching OR to cache results based on semantic similarityLet's first start with an example of exact matchfrom gptcache import Cachefrom gptcache.manager.factory import manager_factoryfrom gptcache.processor.pre import get_promptfrom langchain.cache import GPTCacheimport hashlibdef get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest()def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) cache_obj.init( pre_embedding_func=get_prompt, data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"), )set_llm_cache(GPTCache(init_gptcache))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 21.5 ms, sys: 21.3 ms, total: 42.8 ms Wall time: 6.2 s '\n\nWhy did the chicken cross the road?\n\nTo get
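The CPU/Wall figures quoted on this page come from Jupyter's %%time magic, which was lost when the notebook was flattened into plain text. Outside a notebook, a rough equivalent is easy to sketch (assuming the llm object configured earlier on the page):

import time

def timed_call(prompt: str):
    # Crude wall-clock timing to compare a cache miss against a cache hit
    start = time.perf_counter()
    result = llm(prompt)
    print(f"wall time: {time.perf_counter() - start:.3f} s")
    return result

timed_call("Tell me a joke")  # first call: cache miss, goes to the provider
timed_call("Tell me a joke")  # second call: served from the configured cache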
1,541
did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 571 µs, sys: 43 µs, total: 614 µs Wall time: 635 µs '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'Let's now show an example of similarity cachingfrom gptcache import Cachefrom gptcache.adapter.api import init_similar_cachefrom langchain.cache import GPTCacheimport hashlibdef get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest()def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")set_llm_cache(GPTCache(init_gptcache))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 1.42 s, sys: 279 ms, total: 1.7 s Wall time: 8.44 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# This is an exact match, so it finds it in the cachellm("Tell me a joke") CPU times: user 866 ms, sys: 20 ms, total: 886 ms Wall time: 226 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# This is not an exact match, but semantically within distance so it hits!llm("Tell me joke") CPU times: user 853 ms, sys: 14.8 ms, total: 868 ms Wall time: 224 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'Momento Cache​Use Momento to cache prompts and responses.Requires the momento package to use; uncomment below to install:# !pip install momentoYou'll need to get a Momento auth token to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, as a named parameter auth_token to MomentoCache.from_client_params, or can just be set as an environment variable MOMENTO_AUTH_TOKEN.from datetime import timedeltafrom langchain.cache import MomentoCachecache_name = "langchain"ttl =
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 571 µs, sys: 43 µs, total: 614 µs Wall time: 635 µs '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'Let's now show an example of similarity cachingfrom gptcache import Cachefrom gptcache.adapter.api import init_similar_cachefrom langchain.cache import GPTCacheimport hashlibdef get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest()def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")set_llm_cache(GPTCache(init_gptcache))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 1.42 s, sys: 279 ms, total: 1.7 s Wall time: 8.44 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# This is an exact match, so it finds it in the cachellm("Tell me a joke") CPU times: user 866 ms, sys: 20 ms, total: 886 ms Wall time: 226 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# This is not an exact match, but semantically within distance so it hits!llm("Tell me joke") CPU times: user 853 ms, sys: 14.8 ms, total: 868 ms Wall time: 224 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'Momento Cache​Use Momento to cache prompts and responses.Requires the momento package to use; uncomment below to install:# !pip install momentoYou'll need to get a Momento auth token to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, as a named parameter auth_token to MomentoCache.from_client_params, or can just be set as an environment variable MOMENTO_AUTH_TOKEN.from datetime import timedeltafrom langchain.cache import MomentoCachecache_name = "langchain"ttl =
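Since the Momento cells below call MomentoCache.from_client_params without passing a token explicitly, the simplest route is the environment variable mentioned above; a minimal sketch:

import os
from getpass import getpass

# Set the token so MomentoCache.from_client_params can pick it up implicitly
os.environ["MOMENTO_AUTH_TOKEN"] = getpass("Momento auth token: ")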
1,542
import MomentoCachecache_name = "langchain"ttl = timedelta(days=1)set_llm_cache(MomentoCache.from_client_params(cache_name, ttl))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 40.7 ms, sys: 16.5 ms, total: 57.2 ms Wall time: 1.73 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes faster# When run in the same region as the cache, latencies are single digit msllm("Tell me a joke") CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms Wall time: 57.9 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'SQLAlchemy Cache​You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.# from langchain.cache import SQLAlchemyCache# from sqlalchemy import create_engine# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")# set_llm_cache(SQLAlchemyCache(engine))Custom SQLAlchemy Schemas​# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:from sqlalchemy import Column, Integer, String, Computed, Index, Sequencefrom sqlalchemy import create_enginefrom sqlalchemy.ext.declarative import declarative_basefrom sqlalchemy_utils import TSVectorTypefrom langchain.cache import SQLAlchemyCacheBase = declarative_base()class FulltextLLMCache(Base): # type: ignore """Postgres table for fulltext-indexed LLM Cache""" __tablename__ = "llm_cache_fulltext" id = Column(Integer, Sequence("cache_id"), primary_key=True) prompt = Column(String, nullable=False) llm = Column(String, nullable=False) idx = Column(Integer) response = Column(String) prompt_tsv = Column( TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True), ) __table_args__ = ( Index("idx_fulltext_prompt_tsv",
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: import MomentoCachecache_name = "langchain"ttl = timedelta(days=1)set_llm_cache(MomentoCache.from_client_params(cache_name, ttl))# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 40.7 ms, sys: 16.5 ms, total: 57.2 ms Wall time: 1.73 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes faster# When run in the same region as the cache, latencies are single digit msllm("Tell me a joke") CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms Wall time: 57.9 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'SQLAlchemy Cache​You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.# from langchain.cache import SQLAlchemyCache# from sqlalchemy import create_engine# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")# set_llm_cache(SQLAlchemyCache(engine))Custom SQLAlchemy Schemas​# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:from sqlalchemy import Column, Integer, String, Computed, Index, Sequencefrom sqlalchemy import create_enginefrom sqlalchemy.ext.declarative import declarative_basefrom sqlalchemy_utils import TSVectorTypefrom langchain.cache import SQLAlchemyCacheBase = declarative_base()class FulltextLLMCache(Base): # type: ignore """Postgres table for fulltext-indexed LLM Cache""" __tablename__ = "llm_cache_fulltext" id = Column(Integer, Sequence("cache_id"), primary_key=True) prompt = Column(String, nullable=False) llm = Column(String, nullable=False) idx = Column(Integer) response = Column(String) prompt_tsv = Column( TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True), ) __table_args__ = ( Index("idx_fulltext_prompt_tsv",
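The SQLAlchemy cache is not limited to Postgres. As a quick local test, a sketch that points the same SQLAlchemyCache at a SQLite file (the file name here is just an illustrative choice, not part of the original page):

from sqlalchemy import create_engine
from langchain.globals import set_llm_cache
from langchain.cache import SQLAlchemyCache

# Any SQLAlchemy-supported backend works; a local SQLite file is the easiest to try
engine = create_engine("sqlite:///llm_cache.db")
set_llm_cache(SQLAlchemyCache(engine))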
1,543
= ( Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"), )engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")set_llm_cache(SQLAlchemyCache(engine, FulltextLLMCache))Cassandra caches​You can use Cassandra / Astra DB for caching LLM responses, choosing from the exact-match CassandraCache or the (vector-similarity-based) CassandraSemanticCache.Let's see both in action in the following cells.Connect to the DB​First you need to establish a Session to the DB and to specify a keyspace for the cache table(s). The following gets you started with an Astra DB instance (see e.g. here for more backends and connection options).import getpasskeyspace = input("\nKeyspace name? ")ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ')ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ") Keyspace name? my_keyspace Astra DB Token ("AstraCS:...") ········ Full path to your Secure Connect Bundle? /path/to/secure-connect-databasename.zipfrom cassandra.cluster import Clusterfrom cassandra.auth import PlainTextAuthProvidercluster = Cluster( cloud={ "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH, }, auth_provider=PlainTextAuthProvider("token", ASTRA_DB_APPLICATION_TOKEN),)session = cluster.connect()Exact cache​This will avoid invoking the LLM when the supplied prompt is exactly the same as one encountered already:from langchain.globals import set_llm_cachefrom langchain.cache import CassandraCacheset_llm_cache(CassandraCache(session=session, keyspace=keyspace))print(llm("Why is the Moon always showing the same side?")) The Moon always shows the same side because it is tidally locked to Earth. CPU times: user 41.7 ms, sys: 153 µs, total: 41.8 ms Wall time: 1.96 sprint(llm("Why is the Moon always showing the same side?")) The Moon always shows the same side because it is tidally locked to Earth. CPU
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: = ( Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"), )engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")set_llm_cache(SQLAlchemyCache(engine, FulltextLLMCache))Cassandra caches​You can use Cassandra / Astra DB for caching LLM responses, choosing from the exact-match CassandraCache or the (vector-similarity-based) CassandraSemanticCache.Let's see both in action in the following cells.Connect to the DB​First you need to establish a Session to the DB and to specify a keyspace for the cache table(s). The following gets you started with an Astra DB instance (see e.g. here for more backends and connection options).import getpasskeyspace = input("\nKeyspace name? ")ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ')ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ") Keyspace name? my_keyspace Astra DB Token ("AstraCS:...") ········ Full path to your Secure Connect Bundle? /path/to/secure-connect-databasename.zipfrom cassandra.cluster import Clusterfrom cassandra.auth import PlainTextAuthProvidercluster = Cluster( cloud={ "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH, }, auth_provider=PlainTextAuthProvider("token", ASTRA_DB_APPLICATION_TOKEN),)session = cluster.connect()Exact cache​This will avoid invoking the LLM when the supplied prompt is exactly the same as one encountered already:from langchain.globals import set_llm_cachefrom langchain.cache import CassandraCacheset_llm_cache(CassandraCache(session=session, keyspace=keyspace))print(llm("Why is the Moon always showing the same side?")) The Moon always shows the same side because it is tidally locked to Earth. CPU times: user 41.7 ms, sys: 153 µs, total: 41.8 ms Wall time: 1.96 sprint(llm("Why is the Moon always showing the same side?")) The Moon always shows the same side because it is tidally locked to Earth. CPU
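If you are running your own Cassandra cluster rather than Astra DB, the Session can be built from plain contact points instead of a secure connect bundle. A sketch, assuming a local node and a keyspace you are free to create:

from cassandra.cluster import Cluster

# Connect to a locally running Cassandra node instead of Astra DB
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

keyspace = "my_keyspace"
session.execute(
    f"CREATE KEYSPACE IF NOT EXISTS {keyspace} "
    "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}"
)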
1,544
because it is tidally locked to Earth. CPU times: user 4.09 ms, sys: 0 ns, total: 4.09 ms Wall time: 119 msSemantic cache​This cache will do a semantic similarity search and return a hit if it finds a cached entry that is similar enough. For this, you need to provide an Embeddings instance of your choice.from langchain.embeddings import OpenAIEmbeddingsembedding=OpenAIEmbeddings()from langchain.cache import CassandraSemanticCacheset_llm_cache( CassandraSemanticCache( session=session, keyspace=keyspace, embedding=embedding, table_name="cass_sem_cache" ))print(llm("Why is the Moon always showing the same side?")) The Moon always shows the same side because it is tidally locked with Earth. This means that the same side of the Moon always faces Earth. CPU times: user 21.3 ms, sys: 177 µs, total: 21.4 ms Wall time: 3.09 sprint(llm("How come we always see one face of the moon?")) The Moon always shows the same side because it is tidally locked with Earth. This means that the same side of the Moon always faces Earth. CPU times: user 10.9 ms, sys: 17 µs, total: 10.9 ms Wall time: 461 msOptional Caching​You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLMllm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)llm("Tell me a joke") CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms Wall time: 745 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'llm("Tell me a joke") CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms Wall time: 623 ms '\n\nTwo guys stole a calendar. They got six months each.'Optional Caching in Chains​You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first, and then edit the LLM afterwards.As an example, we will load a summarizer
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: because it is tidally locked to Earth. CPU times: user 4.09 ms, sys: 0 ns, total: 4.09 ms Wall time: 119 msSemantic cache​This cache will do a semantic similarity search and return a hit if it finds a cached entry that is similar enough. For this, you need to provide an Embeddings instance of your choice.from langchain.embeddings import OpenAIEmbeddingsembedding=OpenAIEmbeddings()from langchain.cache import CassandraSemanticCacheset_llm_cache( CassandraSemanticCache( session=session, keyspace=keyspace, embedding=embedding, table_name="cass_sem_cache" ))print(llm("Why is the Moon always showing the same side?")) The Moon always shows the same side because it is tidally locked with Earth. This means that the same side of the Moon always faces Earth. CPU times: user 21.3 ms, sys: 177 µs, total: 21.4 ms Wall time: 3.09 sprint(llm("How come we always see one face of the moon?")) The Moon always shows the same side because it is tidally locked with Earth. This means that the same side of the Moon always faces Earth. CPU times: user 10.9 ms, sys: 17 µs, total: 10.9 ms Wall time: 461 msOptional Caching​You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLMllm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)llm("Tell me a joke") CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms Wall time: 745 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'llm("Tell me a joke") CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms Wall time: 623 ms '\n\nTwo guys stole a calendar. They got six months each.'Optional Caching in Chains​You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first, and then edit the LLM afterwards.As an example, we will load a summarizer
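Conversely, if you want to switch caching off globally rather than per LLM, the global cache can be unset again; a sketch, assuming set_llm_cache accepts None to clear it:

from langchain.globals import set_llm_cache

# Assumption: passing None clears the global cache, so every call reaches the provider again
set_llm_cache(None)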
1,545
an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but then not freeze it for the combine step.llm = OpenAI(model_name="text-davinci-002")no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)from langchain.text_splitter import CharacterTextSplitterfrom langchain.chains.mapreduce import MapReduceChaintext_splitter = CharacterTextSplitter()with open("../../modules/state_of_the_union.txt") as f: state_of_the_union = f.read()texts = text_splitter.split_text(state_of_the_union)from langchain.docstore.document import Documentdocs = [Document(page_content=t) for t in texts[:3]]from langchain.chains.summarize import load_summarize_chainchain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)chain.run(docs) CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms Wall time: 5.09 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.chain.run(docs) CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms Wall time: 1.04 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but then not freeze it for the combine step.llm = OpenAI(model_name="text-davinci-002")no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)from langchain.text_splitter import CharacterTextSplitterfrom langchain.chains.mapreduce import MapReduceChaintext_splitter = CharacterTextSplitter()with open("../../modules/state_of_the_union.txt") as f: state_of_the_union = f.read()texts = text_splitter.split_text(state_of_the_union)from langchain.docstore.document import Documentdocs = [Document(page_content=t) for t in texts[:3]]from langchain.chains.summarize import load_summarize_chainchain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)chain.run(docs) CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms Wall time: 5.09 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.chain.run(docs) CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms Wall time: 1.04 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will
1,546
and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'rm .langchain.db sqlite.dbPreviousLlama.cppNextManifestIn Memory CacheSQLite CacheUpstash Redis CacheStandard CacheRedis CacheStandard CacheSemantic CacheGPTCacheMomento CacheSQLAlchemy CacheCustom SQLAlchemy SchemasCassandra cachesExact cacheSemantic cacheOptional CachingOptional Caching in ChainsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook covers how to cache results of individual LLM calls using different caches.
This notebook covers how to cache results of individual LLM calls using different caches. ->: and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'rm .langchain.db sqlite.dbPreviousLlama.cppNextManifestIn Memory CacheSQLite CacheUpstash Redis CacheStandard CacheRedis CacheStandard CacheSemantic CacheGPTCacheMomento CacheSQLAlchemy CacheCustom SQLAlchemy SchemasCassandra cachesExact cacheSemantic cacheOptional CachingOptional Caching in ChainsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,547
Huggingface TextGen Inference | 🦜️🔗 Langchain
Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.
Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets. ->: Huggingface TextGen Inference | 🦜️🔗 Langchain
1,548
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsHuggingface TextGen InferenceOn this pageHuggingface TextGen InferenceText Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.This notebook goes over how to use a self-hosted LLM using Text Generation Inference.To use, you should have the text_generation python package installed.# !pip3 install text_generationfrom langchain.llms import HuggingFaceTextGenInferencellm = HuggingFaceTextGenInference( inference_server_url="http://localhost:8010/", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03,)llm("What did foo say about bar?")Streaming​from langchain.llms import HuggingFaceTextGenInferencefrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = HuggingFaceTextGenInference( inference_server_url="http://localhost:8010/",
Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.
Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsHuggingface TextGen InferenceOn this pageHuggingface TextGen InferenceText Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.This notebook goes over how to use a self-hosted LLM using Text Generation Inference.To use, you should have the text_generation python package installed.# !pip3 install text_generationfrom langchain.llms import HuggingFaceTextGenInferencellm = HuggingFaceTextGenInference( inference_server_url="http://localhost:8010/", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03,)llm("What did foo say about bar?")Streaming​from langchain.llms import HuggingFaceTextGenInferencefrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = HuggingFaceTextGenInference( inference_server_url="http://localhost:8010/",
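The client above can be dropped into a chain like any other LangChain LLM; a small sketch reusing the llm pointed at the local Text Generation Inference endpoint:

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# llm is the HuggingFaceTextGenInference instance configured above
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What did foo say about bar?"))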
1,549
inference_server_url="http://localhost:8010/", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03, streaming=True)llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()])PreviousHugging Face Local PipelinesNextJavelin AI Gateway TutorialStreamingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.
Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets. ->: inference_server_url="http://localhost:8010/", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03, streaming=True)llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()])PreviousHugging Face Local PipelinesNextJavelin AI Gateway TutorialStreamingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
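As with other streaming-capable wrappers in these docs, the streaming callback can presumably also be attached when the LLM is constructed, so every call streams without passing callbacks explicitly; a sketch under that assumption:

from langchain.llms import HuggingFaceTextGenInference
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Assumption: callbacks supplied at construction time apply to every call, as on other LLM wrappers
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    temperature=0.01,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("What did foo say about bar?")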
1,550
PipelineAI | 🦜️🔗 Langchain
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. ->: PipelineAI | 🦜️🔗 Langchain
1,551
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsPipelineAIOn this pagePipelineAIPipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.This notebook goes over how to use Langchain with PipelineAI.PipelineAI example​This example shows how PipelineAI integrates with LangChain; it was created by PipelineAI.Setup​The pipeline-ai library is required to use the PipelineAI API, AKA Pipeline Cloud. Install pipeline-ai using pip install pipeline-ai.# Install the packagepip install pipeline-aiExample​Imports​import osfrom langchain.llms import PipelineAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API Key​Make sure to get your API key from PipelineAI. Check out the cloud quickstart guide. You'll be given a 30-day free trial with 10 hours of serverless GPU compute to test different models.os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY_HERE"Create the
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsPipelineAIOn this pagePipelineAIPipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.This notebook goes over how to use Langchain with PipelineAI.PipelineAI example​This example shows how PipelineAI integrates with LangChain; it was created by PipelineAI.Setup​The pipeline-ai library is required to use the PipelineAI API, AKA Pipeline Cloud. Install pipeline-ai using pip install pipeline-ai.# Install the packagepip install pipeline-aiExample​Imports​import osfrom langchain.llms import PipelineAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API Key​Make sure to get your API key from PipelineAI. Check out the cloud quickstart guide. You'll be given a 30-day free trial with 10 hours of serverless GPU compute to test different models.os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY_HERE"Create the
1,552
= "YOUR_API_KEY_HERE"Create the PipelineAI instance​When instantiating PipelineAI, you need to specify the id or tag of the pipeline you want to use, e.g. pipeline_key = "public/gpt-j:base". You then have the option of passing additional pipeline-specific keyword arguments:llm = PipelineAI(pipeline_key="YOUR_PIPELINE_KEY", pipeline_kwargs={...})Create a Prompt Template​We will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChain​llm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChain​Provide a question and run the LLMChain.question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousPetalsNextPredibasePipelineAI exampleSetupExampleImportsSet the Environment API KeyCreate the PipelineAI instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. ->: = "YOUR_API_KEY_HERE"Create the PipelineAI instance​When instantiating PipelineAI, you need to specify the id or tag of the pipeline you want to use, e.g. pipeline_key = "public/gpt-j:base". You then have the option of passing additional pipeline-specific keyword arguments:llm = PipelineAI(pipeline_key="YOUR_PIPELINE_KEY", pipeline_kwargs={...})Create a Prompt Template​We will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChain​llm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChain​Provide a question and run the LLMChain.question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousPetalsNextPredibasePipelineAI exampleSetupExampleImportsSet the Environment API KeyCreate the PipelineAI instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
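Putting the steps on this page together, an end-to-end sketch might look as follows (public/gpt-j:base is the example pipeline tag mentioned above; substitute your own API key and pipeline):

import os
from langchain.llms import PipelineAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY_HERE"

# Example pipeline tag from this page; replace with the pipeline you want to run
llm = PipelineAI(pipeline_key="public/gpt-j:base")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?"))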
1,553
vLLM | 🦜️🔗 Langchain
vLLM is a fast and easy-to-use library for LLM inference and serving, offering:
vLLM is a fast and easy-to-use library for LLM inference and serving, offering: ->: vLLM | 🦜️🔗 Langchain
1,554
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsvLLMOn this pagevLLMvLLM is a fast and easy-to-use library for LLM inference and serving, offering:State-of-the-art serving throughput Efficient management of attention key and value memory with PagedAttentionContinuous batching of incoming requestsOptimized CUDA kernelsThis notebook goes over how to use an LLM with LangChain and vLLM.To use, you should have the vllm python package installed.#!pip install vllm -qfrom langchain.llms import VLLMllm = VLLM(model="mosaicml/mpt-7b", trust_remote_code=True, # mandatory for hf models max_new_tokens=128, top_k=10, top_p=0.95, temperature=0.8,)print(llm("What is the capital of France ?")) INFO 08-06 11:37:33 llm_engine.py:70] Initializing an LLM engine with config: model='mosaicml/mpt-7b', tokenizer='mosaicml/mpt-7b', tokenizer_mode=auto, trust_remote_code=True, dtype=torch.bfloat16, use_dummy_weights=False, download_dir=None,
vLLM is a fast and easy-to-use library for LLM inference and serving, offering:
vLLM is a fast and easy-to-use library for LLM inference and serving, offering: ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsvLLMOn this pagevLLMvLLM is a fast and easy-to-use library for LLM inference and serving, offering:State-of-the-art serving throughput Efficient management of attention key and value memory with PagedAttentionContinuous batching of incoming requestsOptimized CUDA kernelsThis notebook goes over how to use an LLM with LangChain and vLLM.To use, you should have the vllm python package installed.#!pip install vllm -qfrom langchain.llms import VLLMllm = VLLM(model="mosaicml/mpt-7b", trust_remote_code=True, # mandatory for hf models max_new_tokens=128, top_k=10, top_p=0.95, temperature=0.8,)print(llm("What is the capital of France ?")) INFO 08-06 11:37:33 llm_engine.py:70] Initializing an LLM engine with config: model='mosaicml/mpt-7b', tokenizer='mosaicml/mpt-7b', tokenizer_mode=auto, trust_remote_code=True, dtype=torch.bfloat16, use_dummy_weights=False, download_dir=None,
1,555
use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0) INFO 08-06 11:37:41 llm_engine.py:196] # GPU blocks: 861, # CPU blocks: 512 Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 2.00it/s] What is the capital of France ? The capital of France is Paris. Integrate the model in an LLMChain​from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=llm)question = "Who was the US president in the year the first Pokemon game was released?"print(llm_chain.run(question)) Processed prompts: 100%|██████████| 1/1 [00:01<00:00, 1.34s/it] 1. The first Pokemon game was released in 1996. 2. The president was Bill Clinton. 3. Clinton was president from 1993 to 2001. 4. The answer is Clinton. Distributed Inference​vLLM supports distributed tensor-parallel inference and serving. To run multi-GPU inference with the LLM class, set the tensor_parallel_size argument to the number of GPUs you want to use. For example, to run inference on 4 GPUsfrom langchain.llms import VLLMllm = VLLM(model="mosaicml/mpt-30b", tensor_parallel_size=4, trust_remote_code=True, # mandatory for hf models)llm("What is the future of AI?")OpenAI-Compatible Server​vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API.This server can be queried in the same format as OpenAI API.OpenAI-Compatible Completion​from langchain.llms import VLLMOpenAIllm = VLLMOpenAI( openai_api_key="EMPTY", openai_api_base="http://localhost:8000/v1", model_name="tiiuae/falcon-7b", model_kwargs={"stop": ["."]})print(llm("Rome is")) a city that is filled with
vLLM is a fast and easy-to-use library for LLM inference and serving, offering:
vLLM is a fast and easy-to-use library for LLM inference and serving, offering: ->: use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0) INFO 08-06 11:37:41 llm_engine.py:196] # GPU blocks: 861, # CPU blocks: 512 Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 2.00it/s] What is the capital of France ? The capital of France is Paris. Integrate the model in an LLMChain​from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=llm)question = "Who was the US president in the year the first Pokemon game was released?"print(llm_chain.run(question)) Processed prompts: 100%|██████████| 1/1 [00:01<00:00, 1.34s/it] 1. The first Pokemon game was released in 1996. 2. The president was Bill Clinton. 3. Clinton was president from 1993 to 2001. 4. The answer is Clinton. Distributed Inference​vLLM supports distributed tensor-parallel inference and serving. To run multi-GPU inference with the LLM class, set the tensor_parallel_size argument to the number of GPUs you want to use. For example, to run inference on 4 GPUsfrom langchain.llms import VLLMllm = VLLM(model="mosaicml/mpt-30b", tensor_parallel_size=4, trust_remote_code=True, # mandatory for hf models)llm("What is the future of AI?")OpenAI-Compatible Server​vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API.This server can be queried in the same format as OpenAI API.OpenAI-Compatible Completion​from langchain.llms import VLLMOpenAIllm = VLLMOpenAI( openai_api_key="EMPTY", openai_api_base="http://localhost:8000/v1", model_name="tiiuae/falcon-7b", model_kwargs={"stop": ["."]})print(llm("Rome is")) a city that is filled with
1,556
is")) a city that is filled with history, ancient buildings, and art around every cornerPreviousTongyi QwenNextWriterIntegrate the model in an LLMChainDistributed InferenceOpenAI-Compatible ServerOpenAI-Compatible CompletionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
vLLM is a fast and easy-to-use library for LLM inference and serving, offering:
vLLM is a fast and easy-to-use library for LLM inference and serving, offering: ->: is")) a city that is filled with history, ancient buildings, and art around every cornerPreviousTongyi QwenNextWriterIntegrate the model in an LLMChainDistributed InferenceOpenAI-Compatible ServerOpenAI-Compatible CompletionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
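For the OpenAI-compatible server shown above, vLLM ships its own entrypoint; at the time of writing it is typically launched with something like python -m vllm.entrypoints.openai.api_server --model tiiuae/falcon-7b (check the vLLM documentation for the current command). Once it is up, the VLLMOpenAI client can be used inside a chain just like any other LLM; a sketch:

from langchain.llms import VLLMOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Points at the locally running OpenAI-compatible vLLM server from the example above
llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="tiiuae/falcon-7b",
)

prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer:",
    input_variables=["question"],
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What is the capital of France?"))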
1,557
DeepInfra | 🦜️🔗 Langchain
DeepInfra provides several LLMs.
DeepInfra provides several LLMs. ->: DeepInfra | 🦜️🔗 Langchain
1,558
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsDeepInfraOn this pageDeepInfraDeepInfra provides several LLMs.This notebook goes over how to use Langchain with DeepInfra.Imports​import osfrom langchain.llms import DeepInfrafrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API Key​Make sure to get your API key from DeepInfra. You have to Login and get a new token.You are given 1 hour of free serverless GPU compute to test different models. (see here)
DeepInfra provides several LLMs.
DeepInfra provides several LLMs. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsDeepInfraOn this pageDeepInfraDeepInfra provides several LLMs.This notebook goes over how to use Langchain with DeepInfra.Imports​import osfrom langchain.llms import DeepInfrafrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API Key​Make sure to get your API key from DeepInfra. You have to Login and get a new token.You are given 1 hour of free serverless GPU compute to test different models. (see here)
1,559
You can print your token with deepctl auth token# get a new token: https://deepinfra.com/login?from=%2Fdashfrom getpass import getpassDEEPINFRA_API_TOKEN = getpass() ········os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKENCreate the DeepInfra instance​You can also use our open-source deepctl tool to manage your model deployments. You can view a list of available parameters here.llm = DeepInfra(model_id="databricks/dolly-v2-12b")llm.model_kwargs = { "temperature": 0.7, "repetition_penalty": 1.2, "max_new_tokens": 250, "top_p": 0.9,}Create a Prompt Template​We will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChain​llm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChain​Provide a question and run the LLMChain.question = "Can penguins reach the North pole?"llm_chain.run(question) "Penguins live in the Southern hemisphere.\nThe North pole is located in the Northern hemisphere.\nSo, first you need to turn the penguin South.\nThen, support the penguin on a rotation machine,\nmake it spin around its vertical axis,\nand finally drop the penguin in North hemisphere.\nNow, you have a penguin in the north pole!\n\nStill didn't understand?\nWell, you're a failure as a teacher."PreviousDatabricksNextDeepSparseImportsSet the Environment API KeyCreate the DeepInfra instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
DeepInfra provides several LLMs.
DeepInfra provides several LLMs. ->: You can print your token with deepctl auth token# get a new token: https://deepinfra.com/login?from=%2Fdashfrom getpass import getpassDEEPINFRA_API_TOKEN = getpass() ········os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKENCreate the DeepInfra instance​You can also use our open-source deepctl tool to manage your model deployments. You can view a list of available parameters here.llm = DeepInfra(model_id="databricks/dolly-v2-12b")llm.model_kwargs = { "temperature": 0.7, "repetition_penalty": 1.2, "max_new_tokens": 250, "top_p": 0.9,}Create a Prompt Template​We will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChain​llm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChain​Provide a question and run the LLMChain.question = "Can penguins reach the North pole?"llm_chain.run(question) "Penguins live in the Southern hemisphere.\nThe North pole is located in the Northern hemisphere.\nSo, first you need to turn the penguin South.\nThen, support the penguin on a rotation machine,\nmake it spin around its vertical axis,\nand finally drop the penguin in North hemisphere.\nNow, you have a penguin in the north pole!\n\nStill didn't understand?\nWell, you're a failure as a teacher."PreviousDatabricksNextDeepSparseImportsSet the Environment API KeyCreate the DeepInfra instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,560
Titan Takeoff | 🦜️🔗 Langchain
TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.
TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. ->: Titan Takeoff | 🦜️🔗 Langchain
1,561
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsTitan TakeoffOn this pageTitan TakeoffTitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. Our inference server, Titan Takeoff enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT2, T5 and many more.Installation​To get started with Iris Takeoff, all you need is to have docker and python installed on your local system. If you wish to use the server with gpu support, then you will need to install docker with cuda support.For Mac and Windows users, make sure you have the docker daemon running! You can check this by running docker ps in your terminal. To start the daemon, open the docker desktop app.Run the following command to install the Iris CLI that will enable you to run the takeoff server:pip install titan-irisChoose a
TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.
TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsTitan TakeoffOn this pageTitan TakeoffTitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. Our inference server, Titan Takeoff enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT2, T5 and many more.Installation​To get started with Iris Takeoff, all you need is to have docker and python installed on your local system. If you wish to use the server with gpu support, then you will need to install docker with cuda support.For Mac and Windows users, make sure you have the docker daemon running! You can check this by running docker ps in your terminal. To start the daemon, open the docker desktop app.Run the following command to install the Iris CLI that will enable you to run the takeoff server:pip install titan-irisChoose a
1,562
the takeoff server:pip install titan-irisChoose a Model​Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the supported models for more information. For information about using your own models, see the custom models.Going forward in this demo we will be using the falcon 7B instruct model. This is a good open-source model that is trained to follow instructions, and is small enough to run inference with easily, even on CPUs.Taking off​Models are referred to by their model id on HuggingFace. Takeoff uses port 8000 by default, but can be configured to use another port. There is also support to use an Nvidia GPU by specifying cuda for the device flag.To start the takeoff server, run:iris takeoff --model tiiuae/falcon-7b-instruct --device cpuiris takeoff --model tiiuae/falcon-7b-instruct --device cuda # Nvidia GPU requirediris takeoff --model tiiuae/falcon-7b-instruct --device cpu --port 5000 # run on port 5000 (default: 8000)You will then be directed to a login page, where you will need to create an account to proceed.
TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.
TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. ->: the takeoff server:pip install titan-irisChoose a Model​Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the supported models for more information. For information about using your own models, see the custom models.Going forward in this demo, we will be using the Falcon 7B Instruct model. This is a good open-source model that is trained to follow instructions, and is small enough to run inference on easily, even on CPUs.Taking off​Models are referred to by their model id on HuggingFace. Takeoff uses port 8000 by default, but can be configured to use another port. There is also support for using an Nvidia GPU by specifying cuda for the device flag.To start the takeoff server, run:iris takeoff --model tiiuae/falcon-7b-instruct --device cpuiris takeoff --model tiiuae/falcon-7b-instruct --device cuda # Nvidia GPU requirediris takeoff --model tiiuae/falcon-7b-instruct --device cpu --port 5000 # run on port 5000 (default: 8000)You will then be directed to a login page, where you will need to create an account to proceed.
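A minimal sketch of pointing the LangChain wrapper at a Takeoff server started on a non-default port, assuming the iris takeoff ... --port 5000 command above was used; only the baseURL changes from the default http://localhost:8000.

from langchain.llms import TitanTakeoff

# Assumes a Takeoff server was started with `--port 5000` as shown above.
llm = TitanTakeoff(baseURL="http://localhost:5000", generate_max_length=128)
print(llm("List three strengths of the Falcon 7B Instruct model."))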
1,563
After logging in, run the command onscreen to check whether the server is ready. When it is ready, you can start using the Takeoff integration.To shutdown the server, run the following command. You will be presented with options on which Takeoff server to shut down, in case you have multiple running servers.iris takeoff --shutdown # shutdown the serverInferencing your model​To access your LLM, use the TitanTakeoff LLM wrapper:from langchain.llms import TitanTakeoffllm = TitanTakeoff( baseURL="http://localhost:8000", generate_max_length=128, temperature=1.0)prompt = "What is the largest planet in the solar system?"llm(prompt)No parameters are needed by default, but a baseURL that points to your desired URL where Takeoff is running can be specified and generation parameters can be supplied.Streaming​Streaming is also supported via the streaming flag:from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain.callbacks.manager import CallbackManagerllm = TitanTakeoff(callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), streaming=True)prompt = "What is the capital of France?"llm(prompt)Integration with LLMChain​from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainllm = TitanTakeoff()template = "What is the capital of {country}"prompt = PromptTemplate(template=template, input_variables=["country"])llm_chain = LLMChain(llm=llm, prompt=prompt)generated = llm_chain.run(country="Belgium")print(generated)PreviousTextGenNextTogether AIInstallationChoose a ModelTaking offInferencing your modelStreamingIntegration with LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.
TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. ->: After logging in, run the command onscreen to check whether the server is ready. When it is ready, you can start using the Takeoff integration.To shutdown the server, run the following command. You will be presented with options on which Takeoff server to shut down, in case you have multiple running servers.iris takeoff --shutdown # shutdown the serverInferencing your model​To access your LLM, use the TitanTakeoff LLM wrapper:from langchain.llms import TitanTakeoffllm = TitanTakeoff( baseURL="http://localhost:8000", generate_max_length=128, temperature=1.0)prompt = "What is the largest planet in the solar system?"llm(prompt)No parameters are needed by default, but a baseURL that points to your desired URL where Takeoff is running can be specified and generation parameters can be supplied.Streaming​Streaming is also supported via the streaming flag:from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain.callbacks.manager import CallbackManagerllm = TitanTakeoff(callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), streaming=True)prompt = "What is the capital of France?"llm(prompt)Integration with LLMChain​from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainllm = TitanTakeoff()template = "What is the capital of {country}"prompt = PromptTemplate(template=template, input_variables=["country"])llm_chain = LLMChain(llm=llm, prompt=prompt)generated = llm_chain.run(country="Belgium")print(generated)PreviousTextGenNextTogether AIInstallationChoose a ModelTaking offInferencing your modelStreamingIntegration with LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
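A minimal sketch combining the streaming setup and the LLMChain integration from the chunk above; it assumes a Takeoff server is already running on the default port, and tokens are printed to stdout as they are generated.

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import TitanTakeoff
from langchain.prompts import PromptTemplate

# Streaming wrapper wired into a chain; output is streamed token by token.
llm = TitanTakeoff(
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    streaming=True,
)
prompt = PromptTemplate(template="What is the capital of {country}", input_variables=["country"])
llm_chain = LLMChain(llm=llm, prompt=prompt)
llm_chain.run(country="Belgium")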
1,564
RELLM | 🦜️🔗 Langchain
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding. ->: RELLM | 🦜️🔗 Langchain
1,565
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsRELLMOn this pageRELLMRELLM is a library that wraps local Hugging Face pipeline models for structured decoding.It works by generating tokens one at a time. At each step, it masks tokens that don't conform to the provided partial regular expression.Warning - this module is still experimentalpip install rellm > /dev/nullHugging Face Baseline​First, let's establish a qualitative baseline by checking the output of the model without structured decoding.import logginglogging.basicConfig(level=logging.ERROR)prompt = """Human: "What's the capital of the United States?"AI Assistant:{ "action": "Final Answer", "action_input": "The capital of the United States is Washington D.C."}Human: "What's the capital of Pennsylvania?"AI Assistant:{ "action": "Final Answer", "action_input": "The capital of Pennsylvania is Harrisburg."}Human: "What 2 + 5?"AI Assistant:{ "action": "Final Answer", "action_input": "2 + 5 = 7."}Human: 'What's the
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsRELLMOn this pageRELLMRELLM is a library that wraps local Hugging Face pipeline models for structured decoding.It works by generating tokens one at a time. At each step, it masks tokens that don't conform to the provided partial regular expression.Warning - this module is still experimentalpip install rellm > /dev/nullHugging Face Baseline​First, let's establish a qualitative baseline by checking the output of the model without structured decoding.import logginglogging.basicConfig(level=logging.ERROR)prompt = """Human: "What's the capital of the United States?"AI Assistant:{ "action": "Final Answer", "action_input": "The capital of the United States is Washington D.C."}Human: "What's the capital of Pennsylvania?"AI Assistant:{ "action": "Final Answer", "action_input": "The capital of Pennsylvania is Harrisburg."}Human: "What 2 + 5?"AI Assistant:{ "action": "Final Answer", "action_input": "2 + 5 = 7."}Human: 'What's the
1,566
"action_input": "2 + 5 = 7."}Human: 'What's the capital of Maryland?'AI Assistant:"""from transformers import pipelinefrom langchain.llms import HuggingFacePipelinehf_model = pipeline( "text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200)original_model = HuggingFacePipeline(pipeline=hf_model)generated = original_model.generate([prompt], stop=["Human:"])print(generated) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=NoneThat's not so impressive, is it? It didn't answer the question and it didn't follow the JSON format at all! Let's try with the structured decoder.RELLM LLM Wrapper​Let's try that again, now providing a regex to match the JSON structured format.import regex # Note this is the regex library NOT python's re stdlib module# We'll choose a regex that matches to a structured json string that looks like:# {# "action": "Final Answer",# "action_input": string or dict# }pattern = regex.compile( r'\{\s*"action":\s*"Final Answer",\s*"action_input":\s*(\{.*\}|"[^"]*")\s*\}\nHuman:')from langchain_experimental.llms import RELLMmodel = RELLM(pipeline=hf_model, regex=pattern, max_new_tokens=200)generated = model.predict(prompt, stop=["Human:"])print(generated) {"action": "Final Answer", "action_input": "The capital of Maryland is Baltimore." } Voila! Free of parsing errors.PreviousPromptLayer OpenAINextReplicateHugging Face BaselineRELLM LLM WrapperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.
RELLM is a library that wraps local Hugging Face pipeline models for structured decoding. ->: "action_input": "2 + 5 = 7."}Human: 'What's the capital of Maryland?'AI Assistant:"""from transformers import pipelinefrom langchain.llms import HuggingFacePipelinehf_model = pipeline( "text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200)original_model = HuggingFacePipeline(pipeline=hf_model)generated = original_model.generate([prompt], stop=["Human:"])print(generated) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=NoneThat's not so impressive, is it? It didn't answer the question and it didn't follow the JSON format at all! Let's try with the structured decoder.RELLM LLM Wrapper​Let's try that again, now providing a regex to match the JSON structured format.import regex # Note this is the regex library NOT python's re stdlib module# We'll choose a regex that matches to a structured json string that looks like:# {# "action": "Final Answer",# "action_input": string or dict# }pattern = regex.compile( r'\{\s*"action":\s*"Final Answer",\s*"action_input":\s*(\{.*\}|"[^"]*")\s*\}\nHuman:')from langchain_experimental.llms import RELLMmodel = RELLM(pipeline=hf_model, regex=pattern, max_new_tokens=200)generated = model.predict(prompt, stop=["Human:"])print(generated) {"action": "Final Answer", "action_input": "The capital of Maryland is Baltimore." } Voila! Free of parsing errors.PreviousPromptLayer OpenAINextReplicateHugging Face BaselineRELLM LLM WrapperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
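A further sketch of the same constrained-decoding idea with a different, purely illustrative pattern (not from the original notebook): it forces the action_input to a short arithmetic answer. It reuses hf_model, the Hugging Face pipeline built in the chunk above.

import regex  # the regex library, not the stdlib re module
from langchain_experimental.llms import RELLM

# Illustrative pattern: constrain "action_input" to digits and arithmetic symbols.
math_pattern = regex.compile(
    r'\{\s*"action":\s*"Final Answer",\s*"action_input":\s*"[0-9 +\-=\.]+"\s*\}\nHuman:'
)
math_model = RELLM(pipeline=hf_model, regex=math_pattern, max_new_tokens=60)
print(math_model.predict('Human: "What is 12 + 30?"\nAI Assistant:\n', stop=["Human:"]))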
1,567
OpenLLM | 🦜️🔗 Langchain
🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.
🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps. ->: OpenLLM | 🦜️🔗 Langchain
1,568
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsOpenLLMOn this pageOpenLLM🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.Installation​Install openllm through PyPIpip install openllmLaunch OpenLLM server locally​To start an LLM server, use openllm start command. For example, to start a dolly-v2 server, run the following command from a terminal:openllm start dolly-v2Wrapper​from langchain.llms import OpenLLMserver_url = "http://localhost:3000" # Replace with remote host if you are running on a remote serverllm = OpenLLM(server_url=server_url)Optional: Local LLM Inference​You may also choose to initialize an LLM managed by OpenLLM locally from current process. This is useful for development purpose and allows developers to quickly try out different types of LLMs.When moving LLM applications to
🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.
🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsOpenLLMOn this pageOpenLLM🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.Installation​Install openllm through PyPIpip install openllmLaunch OpenLLM server locally​To start an LLM server, use openllm start command. For example, to start a dolly-v2 server, run the following command from a terminal:openllm start dolly-v2Wrapper​from langchain.llms import OpenLLMserver_url = "http://localhost:3000" # Replace with remote host if you are running on a remote serverllm = OpenLLM(server_url=server_url)Optional: Local LLM Inference​You may also choose to initialize an LLM managed by OpenLLM locally from current process. This is useful for development purpose and allows developers to quickly try out different types of LLMs.When moving LLM applications to
1,569
types of LLMs.When moving LLM applications to production, we recommend deploying the OpenLLM server separately and accessing it via the server_url option demonstrated above.To load an LLM locally via the LangChain wrapper:from langchain.llms import OpenLLMllm = OpenLLM( model_name="dolly-v2", model_id="databricks/dolly-v2-3b", temperature=0.94, repetition_penalty=1.2,)Integrate with a LLMChain​from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = "What is a good name for a company that makes {product}?"prompt = PromptTemplate(template=template, input_variables=["product"])llm_chain = LLMChain(prompt=prompt, llm=llm)generated = llm_chain.run(product="mechanical keyboard")print(generated) iLkbPreviousOpenAINextOpenLMInstallationLaunch OpenLLM server locallyWrapperOptional: Local LLM InferenceIntegrate with a LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.
🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps. ->: types of LLMs.When moving LLM applications to production, we recommend deploying the OpenLLM server separately and accessing it via the server_url option demonstrated above.To load an LLM locally via the LangChain wrapper:from langchain.llms import OpenLLMllm = OpenLLM( model_name="dolly-v2", model_id="databricks/dolly-v2-3b", temperature=0.94, repetition_penalty=1.2,)Integrate with a LLMChain​from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = "What is a good name for a company that makes {product}?"prompt = PromptTemplate(template=template, input_variables=["product"])llm_chain = LLMChain(prompt=prompt, llm=llm)generated = llm_chain.run(product="mechanical keyboard")print(generated) iLkbPreviousOpenAINextOpenLMInstallationLaunch OpenLLM server locallyWrapperOptional: Local LLM InferenceIntegrate with a LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
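A minimal sketch tying the two halves of the page together: the same LLMChain, but backed by a running openllm start dolly-v2 server instead of an in-process model. The server URL is the default shown in the Wrapper section above; swap in your remote host if the server runs elsewhere.

from langchain.chains import LLMChain
from langchain.llms import OpenLLM
from langchain.prompts import PromptTemplate

# Server-backed wrapper; the chain code stays the same as in the local example.
llm = OpenLLM(server_url="http://localhost:3000")
prompt = PromptTemplate(
    template="What is a good name for a company that makes {product}?",
    input_variables=["product"],
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(product="mechanical keyboard"))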
1,570
Beam | 🦜️🔗 Langchain
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API. ->: Beam | 🦜️🔗 Langchain
1,571
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsBeamBeamCalls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.Create an account, if you don't have one already. Grab your API keys from the dashboard.Install the Beam CLIcurl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | shRegister API Keys and set your beam client id and secret environment variables:import osimport subprocessbeam_client_id = "<Your beam client id>"beam_client_secret = "<Your beam client secret>"# Set the environment variablesos.environ["BEAM_CLIENT_ID"] = beam_client_idos.environ["BEAM_CLIENT_SECRET"] = beam_client_secret# Run the beam configure
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsBeamBeamCalls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.Create an account, if you don't have one already. Grab your API keys from the dashboard.Install the Beam CLIcurl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | shRegister API Keys and set your beam client id and secret environment variables:import osimport subprocessbeam_client_id = "<Your beam client id>"beam_client_secret = "<Your beam client secret>"# Set the environment variablesos.environ["BEAM_CLIENT_ID"] = beam_client_idos.environ["BEAM_CLIENT_SECRET"] = beam_client_secret# Run the beam configure
1,572
= beam_client_secret# Run the beam configure commandbeam configure --clientId={beam_client_id} --clientSecret={beam_client_secret}Install the Beam SDK:pip install beam-sdkDeploy and call Beam directly from langchain!Note that a cold start might take a couple of minutes to return the response, but subsequent calls will be faster!from langchain.llms.beam import Beamllm = Beam( model_name="gpt2", name="langchain-gpt2-test", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=[ "diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors", "xformers", ], max_length="50", verbose=False,)llm._deploy()response = llm._call("Running machine learning on a remote GPU")print(response)PreviousBasetenNextBedrockCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API. ->: = beam_client_secret# Run the beam configure commandbeam configure --clientId={beam_client_id} --clientSecret={beam_client_secret}Install the Beam SDK:pip install beam-sdkDeploy and call Beam directly from langchain!Note that a cold start might take a couple of minutes to return the response, but subsequent calls will be faster!from langchain.llms.beam import Beamllm = Beam( model_name="gpt2", name="langchain-gpt2-test", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=[ "diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors", "xformers", ], max_length="50", verbose=False,)llm._deploy()response = llm._call("Running machine learning on a remote GPU")print(response)PreviousBasetenNextBedrockCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
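A short sketch, assuming the deployed Beam wrapper above behaves like any other LangChain LLM, showing it plugged into an LLMChain once llm._deploy() has completed; the question is illustrative.

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Reuses the `llm` instance deployed above (after llm._deploy()).
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question="What are the benefits of running inference on a remote GPU?"))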
1,573
DeepSparse | 🦜️🔗 Langchain
This page covers how to use the DeepSparse inference runtime within LangChain.
This page covers how to use the DeepSparse inference runtime within LangChain. ->: DeepSparse | 🦜️🔗 Langchain
1,574
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsDeepSparseOn this pageDeepSparseThis page covers how to use the DeepSparse inference runtime within LangChain.
This page covers how to use the DeepSparse inference runtime within LangChain.
This page covers how to use the DeepSparse inference runtime within LangChain. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsDeepSparseOn this pageDeepSparseThis page covers how to use the DeepSparse inference runtime within LangChain.
1,575
It is broken into two parts: installation and setup, and then examples of DeepSparse usage.Installation and Setup​Install the Python package with pip install deepsparseChoose a SparseZoo model or export a supported model to ONNX using OptimumThere exists a DeepSparse LLM wrapper that provides a unified interface for all models:from langchain.llms import DeepSparsellm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none')print(llm('def fib():'))Additional parameters can be passed using the config parameter:config = {'max_generated_tokens': 256}llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none', config=config)PreviousDeepInfraNextEden AIInstallation and SetupCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This page covers how to use the DeepSparse inference runtime within LangChain.
This page covers how to use the DeepSparse inference runtime within LangChain. ->: It is broken into two parts: installation and setup, and then examples of DeepSparse usage.Installation and Setup​Install the Python package with pip install deepsparseChoose a SparseZoo model or export a supported model to ONNX using OptimumThere exists a DeepSparse LLM wrapper that provides a unified interface for all models:from langchain.llms import DeepSparsellm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none')print(llm('def fib():'))Additional parameters can be passed using the config parameter:config = {'max_generated_tokens': 256}llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none', config=config)PreviousDeepInfraNextEden AIInstallation and SetupCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
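A minimal sketch combining the config parameter with a formatted prompt; the SparseZoo stub is the same CodeGen checkpoint used above, and the 64-token limit is an illustrative choice.

from langchain.llms import DeepSparse
from langchain.prompts import PromptTemplate

# Same CodeGen stub as above, with a smaller generation budget via `config`.
llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none",
    config={"max_generated_tokens": 64},
)
prompt = PromptTemplate(template="def {name}():", input_variables=["name"])
print(llm(prompt.format(name="quicksort")))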
1,576
Databricks | 🦜️🔗 Langchain
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: Databricks | 🦜️🔗 Langchain
1,577
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsDatabricksOn this pageDatabricksThe Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsDatabricksOn this pageDatabricksThe Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.
1,578
It supports two endpoint types:Serving endpoint, recommended for production and development,Cluster driver proxy app, recommended for interactive development.from langchain.llms import DatabricksWrapping a serving endpoint​Prerequisites:An LLM was registered and deployed to a Databricks serving endpoint.You have "Can Query" permission to the endpoint.The expected MLflow model signature is:inputs: [{"name": "prompt", "type": "string"}, {"name": "stop", "type": "list[string]"}]outputs: [{"type": "string"}]If the model signature is incompatible or you want to insert extra configs, you can set transform_input_fn and transform_output_fn accordingly.# If running a Databricks notebook attached to an interactive cluster in "single user"# or "no isolation shared" mode, you only need to specify the endpoint name to create# a `Databricks` instance to query a serving endpoint in the same workspace.llm = Databricks(endpoint_name="dolly")llm("How are you?") 'I am happy to hear that you are in good health and as always, you are appreciated.'llm("How are you?", stop=["."]) 'Good'# Otherwise, you can manually specify the Databricks workspace hostname and personal access token# or set `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables, respectively.# See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens# We strongly recommend not exposing the API token explicitly inside a notebook.# You can use Databricks secret manager to store your API token securely.# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-utility-dbutilssecretsimport osos.environ["DATABRICKS_TOKEN"] = dbutils.secrets.get("myworkspace", "api_token")llm = Databricks(host="myworkspace.cloud.databricks.com", endpoint_name="dolly")llm("How are you?") 'I am fine. Thank you!'# If the serving endpoint accepts extra parameters like `temperature`,# you can set them in `model_kwargs`.llm = Databricks(endpoint_name="dolly", model_kwargs={"temperature":
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: It supports two endpoint types:Serving endpoint, recommended for production and development,Cluster driver proxy app, recommended for interactive development.from langchain.llms import DatabricksWrapping a serving endpoint​Prerequisites:An LLM was registered and deployed to a Databricks serving endpoint.You have "Can Query" permission to the endpoint.The expected MLflow model signature is:inputs: [{"name": "prompt", "type": "string"}, {"name": "stop", "type": "list[string]"}]outputs: [{"type": "string"}]If the model signature is incompatible or you want to insert extra configs, you can set transform_input_fn and transform_output_fn accordingly.# If running a Databricks notebook attached to an interactive cluster in "single user"# or "no isolation shared" mode, you only need to specify the endpoint name to create# a `Databricks` instance to query a serving endpoint in the same workspace.llm = Databricks(endpoint_name="dolly")llm("How are you?") 'I am happy to hear that you are in good health and as always, you are appreciated.'llm("How are you?", stop=["."]) 'Good'# Otherwise, you can manually specify the Databricks workspace hostname and personal access token# or set `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables, respectively.# See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens# We strongly recommend not exposing the API token explicitly inside a notebook.# You can use Databricks secret manager to store your API token securely.# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-utility-dbutilssecretsimport osos.environ["DATABRICKS_TOKEN"] = dbutils.secrets.get("myworkspace", "api_token")llm = Databricks(host="myworkspace.cloud.databricks.com", endpoint_name="dolly")llm("How are you?") 'I am fine. Thank you!'# If the serving endpoint accepts extra parameters like `temperature`,# you can set them in `model_kwargs`.llm = Databricks(endpoint_name="dolly", model_kwargs={"temperature":
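The prose above mentions transform_output_fn alongside transform_input_fn; a minimal sketch of using it with a serving endpoint, assuming an endpoint named "dolly" and with whitespace stripping as a purely illustrative post-processing step.

from langchain.llms import Databricks

# Post-process endpoint responses, e.g. trim stray whitespace.
def transform_output(response):
    return response.strip()

llm = Databricks(endpoint_name="dolly", transform_output_fn=transform_output)
llm("How are you?")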
1,579
model_kwargs={"temperature": 0.1})llm("How are you?") 'I am fine.'# Use `transform_input_fn` and `transform_output_fn` if the serving endpoint# expects a different input schema and does not return a JSON string,# respectively, or you want to apply a prompt template on top.def transform_input(**request): full_prompt = f"""{request["prompt"]} Be Concise. """ request["prompt"] = full_prompt return requestllm = Databricks(endpoint_name="dolly", transform_input_fn=transform_input)llm("How are you?") 'I’m Excellent. You?'Wrapping a cluster driver proxy app​Prerequisites:An LLM loaded on a Databricks interactive cluster in "single user" or "no isolation shared" mode.A local HTTP server running on the driver node to serve the model at "/" using HTTP POST with JSON input/output.It uses a port number between [3000, 8000] and listens to the driver IP address or simply 0.0.0.0 instead of localhost only.You have "Can Attach To" permission to the cluster.The expected server schema (using JSON schema) is:inputs:{"type": "object", "properties": { "prompt": {"type": "string"}, "stop": {"type": "array", "items": {"type": "string"}}}, "required": ["prompt"]}outputs: {"type": "string"}If the server schema is incompatible or you want to insert extra configs, you can use transform_input_fn and transform_output_fn accordingly.The following is a minimal example for running a driver proxy app to serve an LLM:from flask import Flask, request, jsonifyimport torchfrom transformers import pipeline, AutoTokenizer, StoppingCriteriamodel = "databricks/dolly-v2-3b"tokenizer = AutoTokenizer.from_pretrained(model, padding_side="left")dolly = pipeline(model=model, tokenizer=tokenizer, trust_remote_code=True, device_map="auto")device = dolly.deviceclass CheckStop(StoppingCriteria): def __init__(self, stop=None): super().__init__() self.stop = stop or [] self.matched = "" self.stop_ids = [tokenizer.encode(s, return_tensors='pt').to(device)
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: model_kwargs={"temperature": 0.1})llm("How are you?") 'I am fine.'# Use `transform_input_fn` and `transform_output_fn` if the serving endpoint# expects a different input schema and does not return a JSON string,# respectively, or you want to apply a prompt template on top.def transform_input(**request): full_prompt = f"""{request["prompt"]} Be Concise. """ request["prompt"] = full_prompt return requestllm = Databricks(endpoint_name="dolly", transform_input_fn=transform_input)llm("How are you?") 'I’m Excellent. You?'Wrapping a cluster driver proxy app​Prerequisites:An LLM loaded on a Databricks interactive cluster in "single user" or "no isolation shared" mode.A local HTTP server running on the driver node to serve the model at "/" using HTTP POST with JSON input/output.It uses a port number between [3000, 8000] and listens to the driver IP address or simply 0.0.0.0 instead of localhost only.You have "Can Attach To" permission to the cluster.The expected server schema (using JSON schema) is:inputs:{"type": "object", "properties": { "prompt": {"type": "string"}, "stop": {"type": "array", "items": {"type": "string"}}}, "required": ["prompt"]}outputs: {"type": "string"}If the server schema is incompatible or you want to insert extra configs, you can use transform_input_fn and transform_output_fn accordingly.The following is a minimal example for running a driver proxy app to serve an LLM:from flask import Flask, request, jsonifyimport torchfrom transformers import pipeline, AutoTokenizer, StoppingCriteriamodel = "databricks/dolly-v2-3b"tokenizer = AutoTokenizer.from_pretrained(model, padding_side="left")dolly = pipeline(model=model, tokenizer=tokenizer, trust_remote_code=True, device_map="auto")device = dolly.deviceclass CheckStop(StoppingCriteria): def __init__(self, stop=None): super().__init__() self.stop = stop or [] self.matched = "" self.stop_ids = [tokenizer.encode(s, return_tensors='pt').to(device)
1,580
return_tensors='pt').to(device) for s in self.stop] def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs): for i, s in enumerate(self.stop_ids): if torch.all((s == input_ids[0][-s.shape[1]:])).item(): self.matched = self.stop[i] return True return Falsedef llm(prompt, stop=None, **kwargs): check_stop = CheckStop(stop) result = dolly(prompt, stopping_criteria=[check_stop], **kwargs) return result[0]["generated_text"].rstrip(check_stop.matched)app = Flask("dolly")@app.route('/', methods=['POST'])def serve_llm(): resp = llm(**request.json) return jsonify(resp)app.run(host="0.0.0.0", port="7777")Once the server is running, you can create a Databricks instance to wrap it as an LLM.# If running a Databricks notebook attached to the same cluster that runs the app,# you only need to specify the driver port to create a `Databricks` instance.llm = Databricks(cluster_driver_port="7777")llm("How are you?") 'Hello, thank you for asking. It is wonderful to hear that you are well.'# Otherwise, you can manually specify the cluster ID to use,# as well as Databricks workspace hostname and personal access token.llm = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")llm("How are you?") 'I am well. You?'# If the app accepts extra parameters like `temperature`,# you can set them in `model_kwargs`.llm = Databricks(cluster_driver_port="7777", model_kwargs={"temperature": 0.1})llm("How are you?") 'I am very well. It is a pleasure to meet you.'# Use `transform_input_fn` and `transform_output_fn` if the app# expects a different input schema and does not return a JSON string,# respectively, or you want to apply a prompt template on top.def transform_input(**request): full_prompt = f"""{request["prompt"]} Be Concise. """ request["prompt"] = full_prompt return requestdef transform_output(response): return response.upper()llm = Databricks(
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: return_tensors='pt').to(device) for s in self.stop] def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs): for i, s in enumerate(self.stop_ids): if torch.all((s == input_ids[0][-s.shape[1]:])).item(): self.matched = self.stop[i] return True return Falsedef llm(prompt, stop=None, **kwargs): check_stop = CheckStop(stop) result = dolly(prompt, stopping_criteria=[check_stop], **kwargs) return result[0]["generated_text"].rstrip(check_stop.matched)app = Flask("dolly")@app.route('/', methods=['POST'])def serve_llm(): resp = llm(**request.json) return jsonify(resp)app.run(host="0.0.0.0", port="7777")Once the server is running, you can create a Databricks instance to wrap it as an LLM.# If running a Databricks notebook attached to the same cluster that runs the app,# you only need to specify the driver port to create a `Databricks` instance.llm = Databricks(cluster_driver_port="7777")llm("How are you?") 'Hello, thank you for asking. It is wonderful to hear that you are well.'# Otherwise, you can manually specify the cluster ID to use,# as well as Databricks workspace hostname and personal access token.llm = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")llm("How are you?") 'I am well. You?'# If the app accepts extra parameters like `temperature`,# you can set them in `model_kwargs`.llm = Databricks(cluster_driver_port="7777", model_kwargs={"temperature": 0.1})llm("How are you?") 'I am very well. It is a pleasure to meet you.'# Use `transform_input_fn` and `transform_output_fn` if the app# expects a different input schema and does not return a JSON string,# respectively, or you want to apply a prompt template on top.def transform_input(**request): full_prompt = f"""{request["prompt"]} Be Concise. """ request["prompt"] = full_prompt return requestdef transform_output(response): return response.upper()llm = Databricks(
1,581
return response.upper()llm = Databricks( cluster_driver_port="7777", transform_input_fn=transform_input, transform_output_fn=transform_output,)llm("How are you?") 'I AM DOING GREAT THANK YOU.'PreviousCTranslate2NextDeepInfraWrapping a serving endpointWrapping a cluster driver proxy appCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform. ->: return response.upper()llm = Databricks( cluster_driver_port="7777", transform_input_fn=transform_input, transform_output_fn=transform_output,)llm("How are you?") 'I AM DOING GREAT THANK YOU.'PreviousCTranslate2NextDeepInfraWrapping a serving endpointWrapping a cluster driver proxy appCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
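A minimal sketch showing the Databricks wrapper used inside an LLMChain, assuming a serving endpoint named "dolly" as in the examples above; the cluster-driver-proxy configurations shown above would plug into the chain the same way.

from langchain.chains import LLMChain
from langchain.llms import Databricks
from langchain.prompts import PromptTemplate

# The wrapper plugs into chains like any other LangChain LLM.
llm = Databricks(endpoint_name="dolly")
prompt = PromptTemplate(
    template="Summarize in one sentence: {text}",
    input_variables=["text"],
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
print(llm_chain.run(text="The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform."))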
1,582
Banana | 🦜️🔗 Langchain
Banana is focused on building the machine learning infrastructure.
Banana is focused on building the machine learning infrastructure. ->: Banana | 🦜️🔗 Langchain
1,583
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsBananaBananaBanana is focused on building the machine learning infrastructure.This example goes over how to use LangChain to interact with Banana models# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/pythonpip install banana-dev# get new tokens: https://app.banana.dev/# We need three parameters to make a Banana.dev API call:# * a team api key# * the model's unique key# * the model's url slugimport osfrom getpass import getpass# You can get this from the main dashboard# at https://app.banana.devos.environ["BANANA_API_KEY"] = "YOUR_API_KEY"# OR# BANANA_API_KEY = getpass()from langchain.llms import Bananafrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])# Both of these are found in your model's # detail page in
Banana is focused on building the machine learning infrastructure.
Banana is focused on building the machine learning infrastructure. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsBananaBananaBanana is focused on building the machine learning infrastructure.This example goes over how to use LangChain to interact with Banana models# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/pythonpip install banana-dev# get new tokens: https://app.banana.dev/# We need three parameters to make a Banana.dev API call:# * a team api key# * the model's unique key# * the model's url slugimport osfrom getpass import getpass# You can get this from the main dashboard# at https://app.banana.devos.environ["BANANA_API_KEY"] = "YOUR_API_KEY"# OR# BANANA_API_KEY = getpass()from langchain.llms import Bananafrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])# Both of these are found in your model's # detail page in
1,584
these are found in your model's # detail page in https://app.banana.devllm = Banana(model_key="YOUR_MODEL_KEY", model_url_slug="YOUR_MODEL_URL_SLUG")llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousBaidu QianfanNextBasetenCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Banana is focused on building the machine learning infrastructure.
Banana is focused on building the machine learning infrastructure. ->: these are found in your model's # detail page in https://app.banana.devllm = Banana(model_key="YOUR_MODEL_KEY", model_url_slug="YOUR_MODEL_URL_SLUG")llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousBaidu QianfanNextBasetenCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,585
Arcee | 🦜️🔗 Langchain
This notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs).
This notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs). ->: Arcee | 🦜️🔗 Langchain
1,586
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsArceeOn this pageArceeThis notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs).Setup​Before using Arcee, make sure the Arcee API key is set as ARCEE_API_KEY environment variable. You can also pass the api key as a named parameter.from langchain.llms import Arcee# Create an instance of the Arcee classarcee = Arcee( model="DALM-PubMed", # arcee_api_key="ARCEE-API-KEY" # if not already set in the environment)Additional Configuration​You can also configure Arcee's parameters such as arcee_api_url, arcee_app_url, and model_kwargs as needed.
This notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs).
This notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs). ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsArceeOn this pageArceeThis notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs).Setup​Before using Arcee, make sure the Arcee API key is set as ARCEE_API_KEY environment variable. You can also pass the api key as a named parameter.from langchain.llms import Arcee# Create an instance of the Arcee classarcee = Arcee( model="DALM-PubMed", # arcee_api_key="ARCEE-API-KEY" # if not already set in the environment)Additional Configuration​You can also configure Arcee's parameters such as arcee_api_url, arcee_app_url, and model_kwargs as needed.
1,587
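The setup note above says the key can come either from the ARCEE_API_KEY environment variable or be passed as a named parameter. A minimal sketch of both options, assuming the constructor behaves as described (the key value is a placeholder):
import os
from langchain.llms import Arcee

# Option 1: export the key before constructing the client (placeholder value).
os.environ["ARCEE_API_KEY"] = "your-arcee-api-key"
arcee = Arcee(model="DALM-PubMed")

# Option 2: pass the key explicitly as a named parameter instead.
arcee_explicit = Arcee(model="DALM-PubMed", arcee_api_key="your-arcee-api-key")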
Setting model_kwargs at object initialization makes those parameters the defaults for all subsequent generate calls. arcee = Arcee( model="DALM-Patent", # arcee_api_key="ARCEE-API-KEY", # if not already set in the environment arcee_api_url="https://custom-api.arcee.ai", # default is https://api.arcee.ai arcee_app_url="https://custom-app.arcee.ai", # default is https://app.arcee.ai model_kwargs={ "size": 5, "filters": [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein" } ] } ) Generating Text: You can generate text from Arcee by providing a prompt. Here's an example: # Generate text prompt = "Can AI-driven music therapy contribute to the rehabilitation of patients with disorders of consciousness?" response = arcee(prompt) Additional parameters: Arcee allows you to apply filters and set the size (that is, the count) of retrieved documents to aid text generation. Filters help narrow down the results. Here's how to use these parameters: # Define filters filters = [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein" }, { "field_name": "year", "filter_type": "strict_search", "value": "1905" } ] # Generate text with filters and size params response = arcee(prompt, size=5, filters=filters)
This notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs).
This notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs). ->: Setting model_kwargs at object initialization makes those parameters the defaults for all subsequent generate calls. arcee = Arcee( model="DALM-Patent", # arcee_api_key="ARCEE-API-KEY", # if not already set in the environment arcee_api_url="https://custom-api.arcee.ai", # default is https://api.arcee.ai arcee_app_url="https://custom-app.arcee.ai", # default is https://app.arcee.ai model_kwargs={ "size": 5, "filters": [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein" } ] } ) Generating Text: You can generate text from Arcee by providing a prompt. Here's an example: # Generate text prompt = "Can AI-driven music therapy contribute to the rehabilitation of patients with disorders of consciousness?" response = arcee(prompt) Additional parameters: Arcee allows you to apply filters and set the size (that is, the count) of retrieved documents to aid text generation. Filters help narrow down the results. Here's how to use these parameters: # Define filters filters = [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein" }, { "field_name": "year", "filter_type": "strict_search", "value": "1905" } ] # Generate text with filters and size params response = arcee(prompt, size=5, filters=filters)
1,588
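Because Arcee is imported from langchain.llms, it should plug into the usual LLMChain pattern like any other LangChain LLM; a short sketch under that assumption (the prompt wording and question are illustrative, and ARCEE_API_KEY is assumed to be set):
from langchain.llms import Arcee
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Assumes ARCEE_API_KEY is already exported in the environment.
arcee = Arcee(model="DALM-PubMed")

# A simple question-answering prompt; the wording is illustrative.
prompt = PromptTemplate(
    template="Answer the following biomedical question: {question}",
    input_variables=["question"],
)

chain = LLMChain(llm=arcee, prompt=prompt)
print(chain.run("Can AI-driven music therapy help patients with disorders of consciousness?"))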
KoboldAI API | 🦜️🔗 Langchain KoboldAI is "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that can be used in LangChain. This example goes over how to use LangChain with that API. Documentation can be found in the browser by adding /api to the end of your endpoint (e.g., http://127.0.0.1:5000/api). from langchain.llms import KoboldApiLLM Replace the endpoint seen below with the one shown in the output after starting the webui with --api or --public-api. Optionally, you can pass in parameters like temperature or max_length. llm = KoboldApiLLM(endpoint="http://192.168.1.144:5000", max_length=80) response = llm("### Instruction:\nWhat is the first book of the bible?\n### Response:")
KoboldAI is a "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that is able to be used in langchain.
KoboldAI is a "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that is able to be used in langchain. ->: KoboldAI API | 🦜️🔗 Langchain Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsKoboldAI APIKoboldAI APIKoboldAI is a "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that is able to be used in langchain.This example goes over how to use LangChain with that API.Documentation can be found in the browser adding /api to the end of your endpoint (i.e http://127.0.0.1/:5000/api).from langchain.llms import KoboldApiLLMReplace the endpoint seen below with the one shown in the output after starting the webui with --api or --public-apiOptionally, you can pass in parameters like temperature or max_lengthllm = KoboldApiLLM(endpoint="http://192.168.1.144:5000", max_length=80)response = llm("### Instruction:\nWhat is the first book of the bible?\n### Response:")PreviousJSONFormerNextLlama.cppCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,589
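The KoboldApiLLM instance above is a regular LangChain LLM, so it should also work inside an LLMChain; a sketch under that assumption (the endpoint mirrors the example above and is illustrative):
from langchain.llms import KoboldApiLLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Endpoint is illustrative; use the one printed when the webui starts with --api.
llm = KoboldApiLLM(endpoint="http://192.168.1.144:5000", max_length=80)

# KoboldAI instruction-style prompt, following the example above.
prompt = PromptTemplate(
    template="### Instruction:\n{instruction}\n### Response:",
    input_variables=["instruction"],
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("What is the first book of the Bible?"))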
Javelin AI Gateway Tutorial | 🦜️🔗 Langchain
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. ->: Javelin AI Gateway Tutorial | 🦜️🔗 Langchain
1,590
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. The Javelin AI Gateway facilitates the use of large language models (LLMs) like OpenAI, Cohere, Anthropic, and others by providing a secure and unified endpoint. The gateway itself provides a centralized mechanism to roll out models systematically,
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. ->: This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. The Javelin AI Gateway facilitates the use of large language models (LLMs) like OpenAI, Cohere, Anthropic, and others by providing a secure and unified endpoint. The gateway itself provides a centralized mechanism to roll out models systematically,
1,591
provide access security, and enforce policy and cost guardrails for enterprises. For a complete listing of all the features and benefits of Javelin, please visit www.getjavelin.io. Step 1: Introduction. The Javelin AI Gateway is an enterprise-grade API Gateway for AI applications. It integrates robust access security, ensuring secure interactions with large language models. Learn more in the official documentation. Step 2: Installation. Before we begin, we must install the javelin_sdk and set up the Javelin API key as an environment variable. pip install 'javelin_sdk' Requirement already satisfied: javelin_sdk in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (0.1.8) Requirement already satisfied: httpx<0.25.0,>=0.24.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (0.24.1) Requirement already satisfied: pydantic<2.0.0,>=1.10.7 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (1.10.12) Requirement already satisfied: certifi in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (2023.5.7) Requirement already satisfied: httpcore<0.18.0,>=0.15.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (0.17.3) Requirement already satisfied: idna in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (3.4) Requirement already satisfied: sniffio in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (1.3.0) Requirement already satisfied: typing-extensions>=4.2.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from pydantic<2.0.0,>=1.10.7->javelin_sdk) (4.7.1) Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. ->: provide access security, and enforce policy and cost guardrails for enterprises. For a complete listing of all the features and benefits of Javelin, please visit www.getjavelin.io. Step 1: Introduction. The Javelin AI Gateway is an enterprise-grade API Gateway for AI applications. It integrates robust access security, ensuring secure interactions with large language models. Learn more in the official documentation. Step 2: Installation. Before we begin, we must install the javelin_sdk and set up the Javelin API key as an environment variable. pip install 'javelin_sdk' Requirement already satisfied: javelin_sdk in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (0.1.8) Requirement already satisfied: httpx<0.25.0,>=0.24.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (0.24.1) Requirement already satisfied: pydantic<2.0.0,>=1.10.7 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (1.10.12) Requirement already satisfied: certifi in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (2023.5.7) Requirement already satisfied: httpcore<0.18.0,>=0.15.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (0.17.3) Requirement already satisfied: idna in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (3.4) Requirement already satisfied: sniffio in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (1.3.0) Requirement already satisfied: typing-extensions>=4.2.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from pydantic<2.0.0,>=1.10.7->javelin_sdk) (4.7.1) Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages
1,592
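Step 2 says to set the Javelin API key as an environment variable but does not name the variable; the sketch below assumes a variable called JAVELIN_API_KEY, which you should replace with whatever name your gateway deployment actually expects:
import os
from getpass import getpass

# Hypothetical variable name; confirm the expected name in the Javelin docs.
if "JAVELIN_API_KEY" not in os.environ:
    os.environ["JAVELIN_API_KEY"] = getpass("Javelin API key: ")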
(from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (0.14.0) Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (3.7.1) Note: you may need to restart the kernel to use updated packages. Step 3: Completions Example. This section will demonstrate how to interact with the Javelin AI Gateway to get completions from a large language model. Here is a Python script that demonstrates this:
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. ->: (from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (0.14.0) Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (3.7.1) Note: you may need to restart the kernel to use updated packages. Step 3: Completions Example. This section will demonstrate how to interact with the Javelin AI Gateway to get completions from a large language model. Here is a Python script that demonstrates this:
1,593
(Note: this assumes that you have set up a route in the gateway called 'eng_dept03'.) from langchain.chains import LLMChain from langchain.llms import JavelinAIGateway from langchain.prompts import PromptTemplate route_completions = "eng_dept03" gateway = JavelinAIGateway( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route=route_completions, model_name="text-davinci-003", ) prompt = PromptTemplate("Translate the following English text to French: {text}") llmchain = LLMChain(llm=gateway, prompt=prompt) result = llmchain.run("podcast player") print(result) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[6], line 2 1 from langchain.chains import LLMChain ----> 2 from langchain.llms import JavelinAIGateway 3 from langchain.prompts import PromptTemplate 5 route_completions = "eng_dept03" ImportError: cannot import name 'JavelinAIGateway' from 'langchain.llms' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/llms/__init__.py) Step 4: Embeddings Example. This section demonstrates how to use the Javelin AI Gateway to obtain embeddings for text queries and documents. Here is a Python script that illustrates this:
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. ->: (Note: this assumes that you have set up a route in the gateway called 'eng_dept03'.) from langchain.chains import LLMChain from langchain.llms import JavelinAIGateway from langchain.prompts import PromptTemplate route_completions = "eng_dept03" gateway = JavelinAIGateway( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route=route_completions, model_name="text-davinci-003", ) prompt = PromptTemplate("Translate the following English text to French: {text}") llmchain = LLMChain(llm=gateway, prompt=prompt) result = llmchain.run("podcast player") print(result) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[6], line 2 1 from langchain.chains import LLMChain ----> 2 from langchain.llms import JavelinAIGateway 3 from langchain.prompts import PromptTemplate 5 route_completions = "eng_dept03" ImportError: cannot import name 'JavelinAIGateway' from 'langchain.llms' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/llms/__init__.py) Step 4: Embeddings Example. This section demonstrates how to use the Javelin AI Gateway to obtain embeddings for text queries and documents. Here is a Python script that illustrates this:
1,594
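Apart from the ImportError (which suggests the installed langchain build predates the Javelin integration), the PromptTemplate in the snippet above is constructed positionally from a bare string; PromptTemplate.from_template is the usual way to do that. A sketch of the same chain with that adjustment, assuming the JavelinAIGateway import succeeds on a newer langchain:
from langchain.chains import LLMChain
from langchain.llms import JavelinAIGateway
from langchain.prompts import PromptTemplate

gateway = JavelinAIGateway(
    gateway_uri="http://localhost:8000",  # replace with your Javelin service URL
    route="eng_dept03",                   # route name from the note above
    model_name="text-davinci-003",
)

# from_template infers the {text} input variable from the string.
prompt = PromptTemplate.from_template("Translate the following English text to French: {text}")

llmchain = LLMChain(llm=gateway, prompt=prompt)
print(llmchain.run("podcast player"))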
(Note: this assumes that you have set up a route in the gateway called 'embeddings'.) from langchain.embeddings import JavelinAIGatewayEmbeddings from langchain.embeddings.openai import OpenAIEmbeddings embeddings = JavelinAIGatewayEmbeddings( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route="embeddings", ) print(embeddings.embed_query("hello")) print(embeddings.embed_documents(["hello"])) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[9], line 1 ----> 1 from langchain.embeddings import JavelinAIGatewayEmbeddings 2 from langchain.embeddings.openai import OpenAIEmbeddings 4 embeddings = JavelinAIGatewayEmbeddings( 5 gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin 6 route="embeddings", 7 ) ImportError: cannot import name 'JavelinAIGatewayEmbeddings' from 'langchain.embeddings' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/embeddings/__init__.py) Step 5: Chat Example. This section illustrates how to interact with the Javelin AI Gateway to facilitate a chat with a large language model. Here is a Python script that demonstrates this:
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. ->: (Note: this assumes that you have set up a route in the gateway called 'embeddings'.) from langchain.embeddings import JavelinAIGatewayEmbeddings from langchain.embeddings.openai import OpenAIEmbeddings embeddings = JavelinAIGatewayEmbeddings( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route="embeddings", ) print(embeddings.embed_query("hello")) print(embeddings.embed_documents(["hello"])) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[9], line 1 ----> 1 from langchain.embeddings import JavelinAIGatewayEmbeddings 2 from langchain.embeddings.openai import OpenAIEmbeddings 4 embeddings = JavelinAIGatewayEmbeddings( 5 gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin 6 route="embeddings", 7 ) ImportError: cannot import name 'JavelinAIGatewayEmbeddings' from 'langchain.embeddings' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/embeddings/__init__.py) Step 5: Chat Example. This section illustrates how to interact with the Javelin AI Gateway to facilitate a chat with a large language model. Here is a Python script that demonstrates this:
1,595
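embed_query returns a single vector and embed_documents a list of vectors, so a common next step is ranking documents by cosine similarity to the query. A small sketch of that, independent of the gateway itself; it assumes `embeddings` is the JavelinAIGatewayEmbeddings instance from the snippet above, and the document strings are illustrative:
import math

def cosine(a, b):
    # Plain cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = ["hello", "bonjour", "goodbye"]
query_vec = embeddings.embed_query("hello")
doc_vecs = embeddings.embed_documents(docs)

# Rank documents from most to least similar to the query.
ranked = sorted(zip(docs, doc_vecs), key=lambda pair: cosine(query_vec, pair[1]), reverse=True)
print([doc for doc, _ in ranked])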
(Note: this assumes that you have set up a route in the gateway called 'mychatbot_route'.) from langchain.chat_models import ChatJavelinAIGateway from langchain.schema import HumanMessage, SystemMessage messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Artificial Intelligence has the power to transform humanity and make the world a better place" ), ] chat = ChatJavelinAIGateway( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route="mychatbot_route", model_name="gpt-3.5-turbo", params={ "temperature": 0.1 } ) print(chat(messages)) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[8], line 1 ----> 1 from langchain.chat_models import ChatJavelinAIGateway 2 from langchain.schema import HumanMessage, SystemMessage 4 messages = [ 5 SystemMessage( 6 content="You are a helpful assistant that translates English to French." (...) 10 ), 11 ] ImportError: cannot import name 'ChatJavelinAIGateway' from 'langchain.chat_models' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/chat_models/__init__.py) Step 6: Conclusion. This tutorial introduced the Javelin AI Gateway and demonstrated how to interact with it using the Python SDK. Remember to check the Javelin Python SDK for more examples and to explore the official documentation for additional details. Happy coding!
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.
This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. ->: (Note: this assumes that you have set up a route in the gateway called 'mychatbot_route'.) from langchain.chat_models import ChatJavelinAIGateway from langchain.schema import HumanMessage, SystemMessage messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Artificial Intelligence has the power to transform humanity and make the world a better place" ), ] chat = ChatJavelinAIGateway( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route="mychatbot_route", model_name="gpt-3.5-turbo", params={ "temperature": 0.1 } ) print(chat(messages)) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[8], line 1 ----> 1 from langchain.chat_models import ChatJavelinAIGateway 2 from langchain.schema import HumanMessage, SystemMessage 4 messages = [ 5 SystemMessage( 6 content="You are a helpful assistant that translates English to French." (...) 10 ), 11 ] ImportError: cannot import name 'ChatJavelinAIGateway' from 'langchain.chat_models' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/chat_models/__init__.py) Step 6: Conclusion. This tutorial introduced the Javelin AI Gateway and demonstrated how to interact with it using the Python SDK. Remember to check the Javelin Python SDK for more examples and to explore the official documentation for additional details. Happy coding!
1,596
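All three tracebacks in this tutorial fail at import time, which suggests the installed langchain version simply predates the Javelin integrations rather than a problem with the gateway itself. A defensive sketch that reports the installed version and degrades gracefully (the upgrade guidance is a placeholder, not a confirmed minimum version):
import langchain

print("langchain version:", langchain.__version__)

try:
    from langchain.chat_models import ChatJavelinAIGateway  # noqa: F401
except ImportError:
    # Placeholder guidance; check the Javelin docs for the actual minimum version.
    print("ChatJavelinAIGateway not available; try upgrading langchain (pip install -U langchain) and retry.")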
Runhouse | 🦜️🔗 Langchain
The Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
The Runhouse allows remote compute and data across environments and users. See the Runhouse docs. ->: Runhouse | 🦜️🔗 Langchain
1,597
The Runhouse allows remote compute and data across environments and users. See the Runhouse docs. This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda. Note: the code uses the name SelfHosted instead of Runhouse. pip install runhouse from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM from langchain.prompts import PromptTemplate from langchain.chains import LLMChain import runhouse as rh INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs # For an on-demand A100 with GCP, Azure, or Lambda gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False) # For an on-demand A10G with AWS (no single A100s on AWS) # gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws') # For an existing cluster
The Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
The Runhouse allows remote compute and data across environments and users. See the Runhouse docs. ->: The Runhouse allows remote compute and data across environments and users. See the Runhouse docs. This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda. Note: the code uses the name SelfHosted instead of Runhouse. pip install runhouse from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM from langchain.prompts import PromptTemplate from langchain.chains import LLMChain import runhouse as rh INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs # For an on-demand A100 with GCP, Azure, or Lambda gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False) # For an on-demand A10G with AWS (no single A100s on AWS) # gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws') # For an existing cluster
1,598
# gpu = rh.cluster(ips=['<ip of the cluster>'], # ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'}, # name='rh-a10x') template = """Question: {question}Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = SelfHostedHuggingFaceLLM( model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"] ) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Beiber was born?" llm_chain.run(question) INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds "\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber" You can also load more custom models through the SelfHostedHuggingFaceLLM interface: llm = SelfHostedHuggingFaceLLM( model_id="google/flan-t5-small", task="text2text-generation", hardware=gpu, ) llm("What is the capital of Germany?") INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds 'berlin' Using a custom load function, we can load a custom pipeline directly on the remote hardware: def load_pipeline(): from transformers import ( AutoModelForCausalLM, AutoTokenizer, pipeline, ) # Need to be inside the fn in notebooks model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) return pipe def inference_fn(pipeline, prompt, stop=None): return pipeline(prompt)[0]["generated_text"][len(prompt) :] llm = SelfHostedHuggingFaceLLM( model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn ) llm("Who is the current US president?")
The Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
The Runhouse allows remote compute and data across environments and users. See the Runhouse docs. ->: # gpu = rh.cluster(ips=['<ip of the cluster>'], # ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'}, # name='rh-a10x') template = """Question: {question}Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = SelfHostedHuggingFaceLLM( model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"] ) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Beiber was born?" llm_chain.run(question) INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds "\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber" You can also load more custom models through the SelfHostedHuggingFaceLLM interface: llm = SelfHostedHuggingFaceLLM( model_id="google/flan-t5-small", task="text2text-generation", hardware=gpu, ) llm("What is the capital of Germany?") INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds 'berlin' Using a custom load function, we can load a custom pipeline directly on the remote hardware: def load_pipeline(): from transformers import ( AutoModelForCausalLM, AutoTokenizer, pipeline, ) # Need to be inside the fn in notebooks model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) return pipe def inference_fn(pipeline, prompt, stop=None): return pipeline(prompt)[0]["generated_text"][len(prompt) :] llm = SelfHostedHuggingFaceLLM( model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn ) llm("Who is the current US president?")
1,599
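The inference_fn above accepts a stop argument but never uses it. A variant that truncates the generated text at the first stop sequence is sketched below, assuming the same pipeline interface; this is an illustration, not Runhouse's own handling:
def inference_fn(pipeline, prompt, stop=None):
    # Generate and strip the prompt prefix, as in the example above.
    text = pipeline(prompt)[0]["generated_text"][len(prompt):]
    # Truncate at the earliest occurrence of any provided stop sequence.
    for s in stop or []:
        idx = text.find(s)
        if idx != -1:
            text = text[:idx]
    return text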
president?") INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds 'john w. bush'You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:pipeline = load_pipeline()llm = SelfHostedPipeline.from_pipeline( pipeline=pipeline, hardware=gpu, model_reqs=model_reqs)Instead, we can also send it to the hardware's filesystem, which will be much faster.rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to( gpu, path="models")llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)PreviousReplicateNextSageMakerEndpointCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
The Runhouse allows remote compute and data across environments and users. See the Runhouse docs. ->: INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds 'john w. bush' You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow: pipeline = load_pipeline() llm = SelfHostedPipeline.from_pipeline( pipeline=pipeline, hardware=gpu, model_reqs=model_reqs ) Instead, we can also send it to the hardware's filesystem, which will be much faster. rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to( gpu, path="models" ) llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
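The last two snippets reference pickle and model_reqs without defining them. A fuller sketch with the missing import and an explicit requirements list is below; it assumes the gpu cluster and load_pipeline function from the earlier cells, and the requirements simply mirror the earlier SelfHostedHuggingFaceLLM example:
import pickle

import runhouse as rh
from langchain.llms import SelfHostedPipeline

# Assumed: `gpu` and `load_pipeline` are defined as in the earlier cells.
# Same requirements used for SelfHostedHuggingFaceLLM above; adjust as needed.
model_reqs = ["pip:./", "transformers", "torch"]
pipeline = load_pipeline()

# Slow path: ship the pipeline over the wire (small models only, per the text above).
llm = SelfHostedPipeline.from_pipeline(pipeline=pipeline, hardware=gpu, model_reqs=model_reqs)

# Faster path: save the pickled pipeline to the cluster's filesystem first, then load from there.
rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)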