Dataset columns:
- Unnamed: 0: int64, values 0 to 4.66k
- page content: string, 23 to 2k characters
- description: string, 8 to 925 characters
- output: string, 38 to 2.93k characters
1,600
Gradient | 🦜️🔗 Langchain
Gradient allows you to fine-tune and get completions on LLMs with a simple web API.
1,601
Gradient
Gradient allows you to fine-tune and get completions on LLMs with a simple web API. This notebook goes over how to use LangChain with Gradient.

Imports
from langchain.llms import GradientLLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

Set the Environment API Key
Make sure to get your API key from Gradient AI. You are given $10 in free credits to test and fine-tune different models.

from getpass import getpass
import os

if not os.environ.get("GRADIENT_ACCESS_TOKEN", None):
    # Access token under https://auth.gradient.ai/select-workspace
    os.environ["GRADIENT_ACCESS_TOKEN"] = getpass("gradient.ai access token:")
if not os.environ.get("GRADIENT_WORKSPACE_ID", None):
    # `ID` listed in `$ gradient workspace list`
    # also displayed after login at https://auth.gradient.ai/select-workspace
    os.environ["GRADIENT_WORKSPACE_ID"] = getpass("gradient.ai workspace id:")
1,602
= getpass("gradient.ai workspace id:")Optional: Validate your Enviroment variables GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID to get currently deployed models. Using the gradientai Python package.pip install gradientai Requirement already satisfied: gradientai in /home/michi/.venv/lib/python3.10/site-packages (1.0.0) Requirement already satisfied: aenum>=3.1.11 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (3.1.15) Requirement already satisfied: pydantic<2.0.0,>=1.10.5 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (1.10.12) Requirement already satisfied: python-dateutil>=2.8.2 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (2.8.2) Requirement already satisfied: urllib3>=1.25.3 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (1.26.16) Requirement already satisfied: typing-extensions>=4.2.0 in /home/michi/.venv/lib/python3.10/site-packages (from pydantic<2.0.0,>=1.10.5->gradientai) (4.5.0) Requirement already satisfied: six>=1.5 in /home/michi/.venv/lib/python3.10/site-packages (from python-dateutil>=2.8.2->gradientai) (1.16.0)import gradientaiclient = gradientai.Gradient()models = client.list_models(only_base=True)for model in models: print(model.id) 99148c6d-c2a0-4fbe-a4a7-e7c05bdb8a09_base_ml_model f0b97d96-51a8-4040-8b22-7940ee1fa24e_base_ml_model cc2dafce-9e6e-4a23-a918-cad6ba89e42e_base_ml_modelnew_model = models[-1].create_model_adapter(name="my_model_adapter")new_model.id, new_model.name ('674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter', 'my_model_adapter')Create the Gradient instance‚ÄãYou can specify different parameters such as the model, max_tokens generated, temperature, etc.As we later want to fine-tune out model, we select the model_adapter with the id 674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter, but you can use any base or fine-tunable model.llm = GradientLLM( # `ID` listed in `$ gradient model list`
1,603
    # `ID` listed in `$ gradient model list`
    model="674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter",
    # # optional: set new credentials, they default to environment variables
    # gradient_workspace_id=os.environ["GRADIENT_WORKSPACE_ID"],
    # gradient_access_token=os.environ["GRADIENT_ACCESS_TOKEN"],
    model_kwargs=dict(max_generated_token_count=128),
)

Create a Prompt Template
We will create a prompt template for Question and Answer.

template = """Question: {question}

Answer: """
prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain
llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain
Provide a question and run the LLMChain.

question = "What NFL team won the Super Bowl in 1994?"
llm_chain.run(question=question)

    '\nThe San Francisco 49ers won the Super Bowl in 1994.'

Improve the results by fine-tuning (optional)
Well, that is wrong: the San Francisco 49ers did not win.
1,604
The correct answer to the question would be "The Dallas Cowboys!". Let's increase the odds of the correct answer by fine-tuning on it, using the same PromptTemplate.

dataset = [
    {
        "inputs": template.format(question="What NFL team won the Super Bowl in 1994?")
        + " The Dallas Cowboys!"
    }
]
dataset

    [{'inputs': 'Question: What NFL team won the Super Bowl in 1994?\n\nAnswer: The Dallas Cowboys!'}]

new_model.fine_tune(samples=dataset)

    FineTuneResponse(number_of_trainable_tokens=27, sum_loss=78.17996)

# we can keep the llm_chain, as the registered model just got refreshed on the gradient.ai servers.
llm_chain.run(question=question)

    'The Dallas Cowboys'
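The fine-tuning step above uses a single sample. As a hedged sketch of how the same pattern extends to a few more samples, the following reuses the template, new_model, and llm_chain objects defined earlier; the extra question/answer pairs are illustrative additions rather than part of the original notebook.

```python
# Build several fine-tuning samples from the same PromptTemplate string.
# The {"inputs": ...} sample format matches the single-sample call above.
qa_pairs = [
    ("What NFL team won the Super Bowl in 1994?", "The Dallas Cowboys!"),
    ("What NFL team won the Super Bowl in 1995?", "The San Francisco 49ers!"),
]

dataset = [
    {"inputs": template.format(question=question) + " " + answer}
    for question, answer in qa_pairs
]

# Same fine_tune call as above, just with more samples.
response = new_model.fine_tune(samples=dataset)
print(response)

# The adapter is refreshed server-side, so the existing chain keeps working.
print(llm_chain.run(question="What NFL team won the Super Bowl in 1994?"))
```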
1,605
GCP Vertex AI | 🦜️🔗 Langchain
Note: This is separate from the Google PaLM integration; it exposes the Vertex AI PaLM API on Google Cloud.
1,606
GCP Vertex AI
Note: This is separate from the Google PaLM integration; it exposes the Vertex AI PaLM API on Google Cloud.

Setting up
By default, Google Cloud does not use customer data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can also be found in Google's Customer Data Processing Addendum (CDPA).

To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:
- have credentials configured for your environment (gcloud, workload identity, etc.), or
- store the path to a service account JSON file in the GOOGLE_APPLICATION_CREDENTIALS environment variable.

This codebase uses the google.auth library, which first looks for the application credentials variable mentioned above and then looks for system-level auth. For more information, see:
1,607
https://cloud.google.com/docs/authentication/application-default-credentials#GAC
https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth

#!pip install langchain google-cloud-aiplatform

from langchain.llms import VertexAI

llm = VertexAI()
print(llm("What are some of the pros and cons of Python as a programming language?"))

    Python is a widely used, interpreted, object-oriented, and high-level programming language with dynamic semantics, used for general-purpose programming. It is known for its readability, simplicity, and versatility. Here are some of the pros and cons of Python:
    **Pros:**
    - **Easy to learn:** Python is known for its simple and intuitive syntax, making it easy for beginners to learn. It has a relatively shallow learning curve compared to other programming languages.
    - **Versatile:** Python is a general-purpose programming language, meaning it can be used for a wide variety of tasks, including web development, data science, machine

Using in a chain
from langchain.prompts import PromptTemplate

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
chain = prompt | llm

question = "Who was the president in the year Justin Beiber was born?"
print(chain.invoke({"question": question}))

    Justin Bieber was born on March 1, 1994. Bill Clinton was the president of the United States from January 20, 1993, to January 20, 2001.
    The final answer is Bill Clinton

Code generation example
You can now leverage the Codey API for code generation within Vertex AI. The model names are:
- code-bison: for code suggestion
- code-gecko: for code completion

llm = VertexAI(model_name="code-bison", max_output_tokens=1000, temperature=0.3)
question = "Write a python function that checks if a string is a valid email address"
print(llm(question))

    ```python
    import re
1,608
    def is_valid_email(email):
        pattern = re.compile(r"[^@]+@[^@]+\.[^@]+")
        return pattern.match(email)
    ```

Full generation info
We can use the generate method to get back extra metadata like safety attributes, and not just text completions.

result = llm.generate([question])
result.generations

    [[GenerationChunk(text='```python\nimport re\n\ndef is_valid_email(email):\n pattern = re.compile(r"[^@]+@[^@]+\\.[^@]+")\n return pattern.match(email)\n```', generation_info={'is_blocked': False, 'safety_attributes': {'Health': 0.1}})]]

Asynchronous calls
With agenerate we can make asynchronous calls.

# If running in a Jupyter notebook you'll need to install nest_asyncio
# !pip install nest_asyncio
import asyncio
# import nest_asyncio
# nest_asyncio.apply()

asyncio.run(llm.agenerate([question]))

    LLMResult(generations=[[GenerationChunk(text='```python\nimport re\n\ndef is_valid_email(email):\n pattern = re.compile(r"[^@]+@[^@]+\\.[^@]+")\n return pattern.match(email)\n```', generation_info={'is_blocked': False, 'safety_attributes': {'Health': 0.1}})]], llm_output=None, run=[RunInfo(run_id=UUID('caf74e91-aefb-48ac-8031-0c505fcbbcc6'))])

Streaming calls
With stream we can stream results from the model.

import sys

for chunk in llm.stream(question):
    sys.stdout.write(chunk)
    sys.stdout.flush()

    ```python
    import re

    def is_valid_email(email):
        """
        Checks if a string is a valid email address.

        Args:
            email: The string to check.

        Returns:
            True if the string is a valid email address, False otherwise.
        """
        # Check for a valid email address format.
        if not re.match(r"^[A-Za-z0-9\.\+_-]+@[A-Za-z0-9\._-]+\.[a-zA-Z]*$", email):
            return False

        # Check if the domain name exists.
        try:
            domain = email.split("@")[1]
            socket.gethostbyname(domain)
        except socket.gaierror:
            return False

        return True
    ```

Vertex Model Garden
1,609
Vertex Model Garden exposes open-sourced models that can be deployed and served on Vertex AI. If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI endpoint in the console or via API.

from langchain.llms import VertexAIModelGarden

llm = VertexAIModelGarden(project="YOUR PROJECT", endpoint_id="YOUR ENDPOINT_ID")
print(llm("What is the meaning of life?"))

Like all LLMs, we can then compose it with other components:

prompt = PromptTemplate.from_template("What is the meaning of {thing}?")
chain = prompt | llm
print(chain.invoke({"thing": "life"}))
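Putting the pieces of this page together, the sketch below combines a VertexAI text model, the prompt | llm chain composition, and streaming in one place. It assumes google-cloud-aiplatform is installed and application default credentials are configured as described in "Setting up"; the model name text-bison is the standard PaLM text model at the time of writing, and the exact parameter values are placeholders.

```python
import sys

from langchain.llms import VertexAI
from langchain.prompts import PromptTemplate

# Text completion model; parameters mirror the code-bison example above.
llm = VertexAI(model_name="text-bison", max_output_tokens=256, temperature=0.1)

prompt = PromptTemplate.from_template(
    "Question: {question}\n\nAnswer: Let's think step by step."
)
chain = prompt | llm

# Blocking call through the chain, as in "Using in a chain".
print(chain.invoke({"question": "What are some pros and cons of Python?"}))

# Token-by-token streaming, as in "Streaming calls".
for chunk in llm.stream("Name three uses of a vector database."):
    sys.stdout.write(chunk)
    sys.stdout.flush()
```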
1,610
Tongyi Qwen | 🦜️🔗 Langchain
Tongyi Qwen is a large-scale language model developed by Alibaba's Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations.
1,611
Tongyi Qwen
Tongyi Qwen is a large-scale language model developed by Alibaba's Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations.

# Install the package
pip install dashscope

# Get a new token: https://help.aliyun.com/document_detail/611472.html?spm=a2c4g.2399481.0.0
from getpass import getpass

DASHSCOPE_API_KEY = getpass()

    ········

import os

os.environ["DASHSCOPE_API_KEY"] = DASHSCOPE_API_KEY

from langchain.llms import Tongyi
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
1,612
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = Tongyi()
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)

    "The year Justin Bieber was born was 1994. The Denver Broncos won the Super Bowl in 1997, which means they would have been the team that won the Super Bowl during Justin Bieber's birth year. So the answer is the Denver Broncos."
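For a quick check without a chain, the Tongyi wrapper can also be called directly, like the other LLM integrations on these pages. A minimal sketch, assuming DASHSCOPE_API_KEY has already been set as shown above:

```python
import os

from langchain.llms import Tongyi

# Fail early if the DashScope key from the setup step is missing.
assert os.environ.get("DASHSCOPE_API_KEY"), "Set DASHSCOPE_API_KEY first."

llm = Tongyi()

# Direct completion call, same calling convention as the other LLM wrappers.
print(llm("Explain in one sentence what a large language model is."))
```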
1,613
CTranslate2 | 🦜️🔗 Langchain
CTranslate2 is a C++ and Python library for efficient inference with Transformer models.
1,614
CTranslate2
CTranslate2 is a C++ and Python library for efficient inference with Transformer models.

The project implements a custom runtime that applies many performance optimization techniques such as weights quantization, layer fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.

A full list of features and supported models is included in the project's repository. To start, please check out the official quickstart guide.

To use, you should have the ctranslate2 Python package installed.

#!pip install ctranslate2

To use a Hugging Face model with CTranslate2, it first has to be converted to CTranslate2 format using the ct2-transformers-converter command. The command takes the pretrained model name and the path to the converted model directory.

# conversion can take several minutes
1,615
ct2-transformers-converter --model meta-llama/Llama-2-7b-hf --quantization bfloat16 --output_dir ./llama-2-7b-ct2 --force

    Loading checkpoint shards: 100%|██████████████████| 2/2 [00:01<00:00,  1.81it/s]

from langchain.llms import CTranslate2

llm = CTranslate2(
    # output_dir from above:
    model_path="./llama-2-7b-ct2",
    tokenizer_name="meta-llama/Llama-2-7b-hf",
    device="cuda",
    # device_index can be either a single int or a list of ints,
    # indicating the ids of the GPUs to use for inference:
    device_index=[0, 1],
    compute_type="bfloat16",
)

Single call
print(
    llm(
        "He presented me with plausible evidence for the existence of unicorns: ",
        max_length=256,
        sampling_topk=50,
        sampling_temperature=0.2,
        repetition_penalty=2,
        cache_static_prompt=False,
    )
)

    He presented me with plausible evidence for the existence of unicorns: 1) they are mentioned in ancient texts; and, more importantly to him (and not so much as a matter that would convince most people), he had seen one. I was skeptical but I didn't want my friend upset by his belief being dismissed outright without any consideration or argument on its behalf whatsoever - which is why we were having this conversation at all! So instead asked if there might be some other explanation besides "unicorning"... maybe it could have been an ostrich? Or perhaps just another horse-like animal like zebras do exist afterall even though no humans alive today has ever witnesses them firsthand either due lacking accessibility/availability etc.. But then again those animals aren’ t exactly known around here anyway…” And thus began our discussion about whether these creatures actually existed anywhere else outside Earth itself where only few scientists ventured before us nowadays because technology allows exploration beyond borders once thought impossible centuries ago when travel meant walking everywhere yourself until reaching destination point A->B via
1,616
footsteps alone unless someone helped guide along way through woods full darkness nighttime hours

Multiple calls:
print(
    llm.generate(
        ["The list of top romantic songs:\n1.", "The list of top rap songs:\n1."],
        max_length=128,
    )
)

    generations=[[Generation(text='The list of top romantic songs:\n1. “I Will Always Love You” by Whitney Houston\n2. “Can’t Help Falling in Love” by Elvis Presley\n3. “Unchained Melody” by The Righteous Brothers\n4. “I Will Always Love You” by Dolly Parton\n5. “I Will Always Love You” by Whitney Houston\n6. “I Will Always Love You” by Dolly Parton\n7. “I Will Always Love You” by The Beatles\n8. “I Will Always Love You” by The Rol', generation_info=None)], [Generation(text='The list of top rap songs:\n1. “God’s Plan” by Drake\n2. “Rockstar” by Post Malone\n3. “Bad and Boujee” by Migos\n4. “Humble” by Kendrick Lamar\n5. “Bodak Yellow” by Cardi B\n6. “I’m the One” by DJ Khaled\n7. “Motorsport” by Migos\n8. “No Limit” by G-Eazy\n9. “Bounce Back” by Big Sean\n10. “', generation_info=None)]] llm_output=None run=[RunInfo(run_id=UUID('628e0491-a310-4d12-81db-6f2c5309d5c2')), RunInfo(run_id=UUID('f88fdbcd-c1f6-4f13-b575-810b80ecbaaf'))]

Integrate the model in an LLMChain
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """{question}

Let's think step by step. """
prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Who was the US president in the year the first Pokemon game was released?"
print(llm_chain.run(question))

    Who was the US president in the year the first Pokemon game was released? Let's think step by step. 1996 was the year the first Pokemon game was released.
    \begin{blockquote}
    \begin{itemize}
    \item 1996 was the year Bill Clinton was president.
    \item 1996
1,617
    was the year the first Pokemon game was released.
    \item 1996 was the year the first Pokemon game was released.
    \end{itemize}
    \end{blockquote}

    I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.

    Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
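The examples above target two GPUs with bfloat16 weights. As a hedged sketch of a CPU-only setup: the wrapper accepts the same constructor arguments, and CTranslate2 itself supports int8 quantization, but the quantization value, output directory, and compute type below are assumptions rather than something shown on this page.

```python
from langchain.llms import CTranslate2

# Assumes the model was converted for CPU with something like:
#   ct2-transformers-converter --model meta-llama/Llama-2-7b-hf \
#       --quantization int8 --output_dir ./llama-2-7b-ct2-int8 --force
# (int8 and the output directory are illustrative choices.)
llm_cpu = CTranslate2(
    model_path="./llama-2-7b-ct2-int8",        # hypothetical converted dir
    tokenizer_name="meta-llama/Llama-2-7b-hf",
    device="cpu",                               # no GPU required
    compute_type="int8",                        # assumed CTranslate2 compute type
)

print(
    llm_cpu(
        "Summarize why quantization speeds up inference: ",
        max_length=64,
        sampling_temperature=0.7,
    )
)
```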
1,618
Azure OpenAI | 🦜️🔗 Langchain
This notebook goes over how to use Langchain with Azure OpenAI.
1,619
Azure OpenAI
This notebook goes over how to use LangChain with Azure OpenAI.

The Azure OpenAI API is compatible with OpenAI's API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI, with the exceptions noted below.

API configuration
You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:

# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2023-05-15` for the released version.
export OPENAI_API_VERSION=2023-05-15
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
1,620
export OPENAI_API_KEY=<your Azure OpenAI API key>

Alternatively, you can configure the API right within your running Python environment:

import os
os.environ["OPENAI_API_TYPE"] = "azure"

Azure Active Directory Authentication
There are two ways you can authenticate to Azure OpenAI:
- API Key
- Azure Active Directory (AAD)

Using the API key is the easiest way to get started. You can find your API key in the Azure portal under your Azure OpenAI resource. However, if you have complex security requirements, you may want to use Azure Active Directory. You can find more information on how to use AAD with Azure OpenAI here.

If you are developing locally, you will need to have the Azure CLI installed and be logged in. You can install the Azure CLI here. Then, run az login to log in.

Add an Azure role assignment, Cognitive Services OpenAI User, scoped to your Azure OpenAI resource. This will allow you to get a token from AAD to use with Azure OpenAI. You can grant this role assignment to a user, group, service principal, or managed identity. For more information about Azure OpenAI RBAC roles, see here.

To use AAD in Python with LangChain, install the azure-identity package. Then, set OPENAI_API_TYPE to azure_ad. Next, use the DefaultAzureCredential class to get a token from AAD by calling get_token as shown below. Finally, set the OPENAI_API_KEY environment variable to the token value.

import os
from azure.identity import DefaultAzureCredential

# Get the Azure Credential
credential = DefaultAzureCredential()

# Set the API type to `azure_ad`
os.environ["OPENAI_API_TYPE"] = "azure_ad"

# Set the API_KEY to the token from the Azure credential
os.environ["OPENAI_API_KEY"] = credential.get_token("https://cognitiveservices.azure.com/.default").token

The DefaultAzureCredential class is an easy way to get started with AAD authentication. You can also customize the credential chain if necessary.
1,621
shown below, we first try Managed Identity, then fall back to the Azure CLI. This is useful if you are running your code in Azure, but want to develop locally.from azure.identity import ChainedTokenCredential, ManagedIdentityCredential, AzureCliCredentialcredential = ChainedTokenCredential( ManagedIdentityCredential(), AzureCliCredential())Deployments​With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.Note: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the AzureChatOpenAI class. For docs on Azure chat see Azure Chat OpenAI documentation.Let's say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:import openairesponse = openai.Completion.create( engine="text-davinci-002-prod", prompt="This is a test", max_tokens=5)pip install openaiimport osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_VERSION"] = "2023-05-15"os.environ["OPENAI_API_BASE"] = "..."os.environ["OPENAI_API_KEY"] = "..."# Import Azure OpenAIfrom langchain.llms import AzureOpenAI# Create an instance of Azure OpenAI# Replace the deployment name with your ownllm = AzureOpenAI( deployment_name="td2", model_name="text-davinci-002",)# Run the LLMllm("Tell me a joke") "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"We can also print the LLM object to see its custom representation.print(llm) AzureOpenAI Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}PreviousAzure MLNextBaidu QianfanAPI configurationAzure Active Directory
This notebook goes over how to use Langchain with Azure OpenAI.
This notebook goes over how to use Langchain with Azure OpenAI. ->: shown below, we first try Managed Identity, then fall back to the Azure CLI. This is useful if you are running your code in Azure, but want to develop locally.from azure.identity import ChainedTokenCredential, ManagedIdentityCredential, AzureCliCredentialcredential = ChainedTokenCredential( ManagedIdentityCredential(), AzureCliCredential())Deployments​With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.Note: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the AzureChatOpenAI class. For docs on Azure chat see Azure Chat OpenAI documentation.Let's say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:import openairesponse = openai.Completion.create( engine="text-davinci-002-prod", prompt="This is a test", max_tokens=5)pip install openaiimport osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_VERSION"] = "2023-05-15"os.environ["OPENAI_API_BASE"] = "..."os.environ["OPENAI_API_KEY"] = "..."# Import Azure OpenAIfrom langchain.llms import AzureOpenAI# Create an instance of Azure OpenAI# Replace the deployment name with your ownllm = AzureOpenAI( deployment_name="td2", model_name="text-davinci-002",)# Run the LLMllm("Tell me a joke") "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"We can also print the LLM object to see its custom representation.print(llm) AzureOpenAI Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}PreviousAzure MLNextBaidu QianfanAPI configurationAzure Active Directory
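The note above points to the AzureChatOpenAI class for chat-model deployments. A minimal sketch of that path is shown below, assuming a chat deployment already exists in your Azure OpenAI resource; the deployment name "gpt-35-turbo" is illustrative, not taken from the original docs.
import os
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

# Same environment configuration as the completion example above.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["OPENAI_API_BASE"] = "..."
os.environ["OPENAI_API_KEY"] = "..."

# Point the wrapper at your own chat deployment name.
chat = AzureChatOpenAI(deployment_name="gpt-35-turbo")
print(chat([HumanMessage(content="Tell me a joke")]).content)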
1,622
QianfanAPI configurationAzure Active Directory AuthenticationDeploymentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
This notebook goes over how to use Langchain with Azure OpenAI.
This notebook goes over how to use Langchain with Azure OpenAI. ->: QianfanAPI configurationAzure Active Directory AuthenticationDeploymentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,623
Writer | 🦜️🔗 Langchain
Writer is a platform to generate different language content.
Writer is a platform to generate different language content. ->: Writer | 🦜️🔗 Langchain
1,624
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsWriterWriterWriter is a platform to generate different language content.This example goes over how to use LangChain to interact with Writer models.You have to get the WRITER_API_KEY here.from getpass import getpassWRITER_API_KEY = getpass() ········import osos.environ["WRITER_API_KEY"] = WRITER_API_KEYfrom langchain.llms import Writerfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])# If you get an error, probably, you need to set up the "base_url" parameter that can be taken from the error log.llm = Writer()llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousvLLMNextXorbits Inference
Writer is a platform to generate different language content.
Writer is a platform to generate different language content. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsWriterWriterWriter is a platform to generate different language content.This example goes over how to use LangChain to interact with Writer models.You have to get the WRITER_API_KEY here.from getpass import getpassWRITER_API_KEY = getpass() ········import osos.environ["WRITER_API_KEY"] = WRITER_API_KEYfrom langchain.llms import Writerfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])# If you get an error, probably, you need to set up the "base_url" parameter that can be taken from the error log.llm = Writer()llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousvLLMNextXorbits Inference
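If the default endpoint raises an error, the comment in the example above suggests passing base_url explicitly. A small sketch of that follows, assuming WRITER_API_KEY is already set in the environment as shown; the URL string is a placeholder you would copy from the error log or your Writer console, not a real endpoint.
from langchain.llms import Writer
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# The exact organization/model path comes from your own Writer account or the error log.
llm = Writer(base_url="<url taken from the error log>")
prompt = PromptTemplate(template="Question: {question}\nAnswer:", input_variables=["question"])
print(LLMChain(prompt=prompt, llm=llm).run(question="What is Writer used for?"))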
1,625
Inference (Xinference)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Writer is a platform to generate different language content.
Writer is a platform to generate different language content. ->: Inference (Xinference)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,626
Azure ML | 🦜️🔗 Langchain
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML. ->: Azure ML | 🦜️🔗 Langchain
1,627
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsAzure MLOn this pageAzure MLAzure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.This notebook goes over how to use an LLM hosted on an AzureML online endpointfrom langchain.llms.azureml_endpoint import AzureMLOnlineEndpointSet up​To use the wrapper, you must deploy a model on AzureML and obtain the following parameters:endpoint_api_key: Required - The API key provided by the endpointendpoint_url: Required - The REST endpoint url provided by the endpointdeployment_name: Not required - The deployment name of the model using the endpointContent Formatter​The content_formatter parameter is a handler class for transforming the
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsAzure MLOn this pageAzure MLAzure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.This notebook goes over how to use an LLM hosted on an AzureML online endpointfrom langchain.llms.azureml_endpoint import AzureMLOnlineEndpointSet up​To use the wrapper, you must deploy a model on AzureML and obtain the following parameters:endpoint_api_key: Required - The API key provided by the endpointendpoint_url: Required - The REST endpoint url provided by the endpointdeployment_name: Not required - The deployment name of the model using the endpointContent Formatter​The content_formatter parameter is a handler class for transforming the
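Putting the three parameters above together, here is a minimal sketch that uses one of the built-in content formatters described next; it assumes a Dolly-v2 deployment, and the environment-variable names and deployment name are illustrative.
import os
from langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint, DollyContentFormatter

llm = AzureMLOnlineEndpoint(
    endpoint_api_key=os.getenv("DOLLY_ENDPOINT_API_KEY"),   # required
    endpoint_url=os.getenv("DOLLY_ENDPOINT_URL"),           # required
    deployment_name="databricks-dolly-v2-12b-4",            # optional, illustrative name
    model_kwargs={"temperature": 0.3, "max_tokens": 200},
    content_formatter=DollyContentFormatter(),
)
print(llm("Explain what an AzureML online endpoint is in one sentence."))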
1,628
parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently, a ContentFormatterBase class is provided to allow users to transform data to their liking. The following content formatters are provided:GPT2ContentFormatter: Formats request and response data for GPT2DollyContentFormatter: Formats request and response data for the Dolly-v2HFContentFormatter: Formats request and response data for text-generation Hugging Face modelsLLamaContentFormatter: Formats request and response data for LLaMa2Note: OSSContentFormatter is being deprecated and replaced with GPT2ContentFormatter. The logic is the same but GPT2ContentFormatter is a more suitable name. You can still continue to use OSSContentFormatter as the changes are backwards compatible.Below is an example using a summarization model from Hugging Face.Custom Content Formatter​from typing import Dictfrom langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint, ContentFormatterBaseimport osimport jsonclass CustomFormatter(ContentFormatterBase): content_type = "application/json" accepts = "application/json" def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes: input_str = json.dumps( { "inputs": [prompt], "parameters": model_kwargs, "options": {"use_cache": False, "wait_for_model": True}, } ) return str.encode(input_str) def format_response_payload(self, output: bytes) -> str: response_json = json.loads(output) return response_json[0]["summary_text"]content_formatter = CustomFormatter()llm = AzureMLOnlineEndpoint( endpoint_api_key=os.getenv("BART_ENDPOINT_API_KEY"), endpoint_url=os.getenv("BART_ENDPOINT_URL"), model_kwargs={"temperature": 0.8, "max_new_tokens": 400},
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML. ->: parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently, a ContentFormatterBase class is provided to allow users to transform data to their liking. The following content formatters are provided:GPT2ContentFormatter: Formats request and response data for GPT2DollyContentFormatter: Formats request and response data for the Dolly-v2HFContentFormatter: Formats request and response data for text-generation Hugging Face modelsLLamaContentFormatter: Formats request and response data for LLaMa2Note: OSSContentFormatter is being deprecated and replaced with GPT2ContentFormatter. The logic is the same but GPT2ContentFormatter is a more suitable name. You can still continue to use OSSContentFormatter as the changes are backwards compatible.Below is an example using a summarization model from Hugging Face.Custom Content Formatter​from typing import Dictfrom langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint, ContentFormatterBaseimport osimport jsonclass CustomFormatter(ContentFormatterBase): content_type = "application/json" accepts = "application/json" def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes: input_str = json.dumps( { "inputs": [prompt], "parameters": model_kwargs, "options": {"use_cache": False, "wait_for_model": True}, } ) return str.encode(input_str) def format_response_payload(self, output: bytes) -> str: response_json = json.loads(output) return response_json[0]["summary_text"]content_formatter = CustomFormatter()llm = AzureMLOnlineEndpoint( endpoint_api_key=os.getenv("BART_ENDPOINT_API_KEY"), endpoint_url=os.getenv("BART_ENDPOINT_URL"), model_kwargs={"temperature": 0.8, "max_new_tokens": 400},
1,629
0.8, "max_new_tokens": 400}, content_formatter=content_formatter,)large_text = """On January 7, 2020, Blockberry Creative announced that HaSeul would not participate in the promotion for Loona's next album because of mental health concerns. She was said to be diagnosed with "intermittent anxiety symptoms" and would be taking time to focus on her health.[39] On February 5, 2020, Loona released their second EP titled [#] (read as hash), along with the title track "So What".[40] Although HaSeul did not appear in the title track, her vocals are featured on three other songs on the album, including "365". Once peaked at number 1 on the daily Gaon Retail Album Chart,[41] the EP then debuted at number 2 on the weekly Gaon Album Chart. On March 12, 2020, Loona won their first music show trophy with "So What" on Mnet's M Countdown.[42]On October 19, 2020, Loona released their third EP titled [12:00] (read as midnight),[43] accompanied by its first single "Why Not?". HaSeul was again not involved in the album, out of her own decision to focus on the recovery of her health.[44] The EP then became their first album to enter the Billboard 200, debuting at number 112.[45] On November 18, Loona released the music video for "Star", another song on [12:00].[46] Peaking at number 40, "Star" is Loona's first entry on the Billboard Mainstream Top 40, making them the second K-pop girl group to enter the chart.[47]On June 1, 2021, Loona announced that they would be having a comeback on June 28, with their fourth EP, [&] (read as and).[48] The following day, on June 2, a teaser was posted to Loona's official social media accounts showing twelve sets of eyes, confirming the return of member HaSeul who had been on hiatus since early 2020.[49] On June 12, group members YeoJin, Kim Lip, Choerry, and Go Won released the song "Yum-Yum" as a collaboration with Cocomong.[50] On September 8, they released another collaboration song named "Yummy-Yummy".[51] On June 27, 2021, Loona announced at
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML. ->: 0.8, "max_new_tokens": 400}, content_formatter=content_formatter,)large_text = """On January 7, 2020, Blockberry Creative announced that HaSeul would not participate in the promotion for Loona's next album because of mental health concerns. She was said to be diagnosed with "intermittent anxiety symptoms" and would be taking time to focus on her health.[39] On February 5, 2020, Loona released their second EP titled [#] (read as hash), along with the title track "So What".[40] Although HaSeul did not appear in the title track, her vocals are featured on three other songs on the album, including "365". Once peaked at number 1 on the daily Gaon Retail Album Chart,[41] the EP then debuted at number 2 on the weekly Gaon Album Chart. On March 12, 2020, Loona won their first music show trophy with "So What" on Mnet's M Countdown.[42]On October 19, 2020, Loona released their third EP titled [12:00] (read as midnight),[43] accompanied by its first single "Why Not?". HaSeul was again not involved in the album, out of her own decision to focus on the recovery of her health.[44] The EP then became their first album to enter the Billboard 200, debuting at number 112.[45] On November 18, Loona released the music video for "Star", another song on [12:00].[46] Peaking at number 40, "Star" is Loona's first entry on the Billboard Mainstream Top 40, making them the second K-pop girl group to enter the chart.[47]On June 1, 2021, Loona announced that they would be having a comeback on June 28, with their fourth EP, [&] (read as and).[48] The following day, on June 2, a teaser was posted to Loona's official social media accounts showing twelve sets of eyes, confirming the return of member HaSeul who had been on hiatus since early 2020.[49] On June 12, group members YeoJin, Kim Lip, Choerry, and Go Won released the song "Yum-Yum" as a collaboration with Cocomong.[50] On September 8, they released another collaboration song named "Yummy-Yummy".[51] On June 27, 2021, Loona announced at
1,630
On June 27, 2021, Loona announced at the end of their special clip that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.[52] On August 27, it was announced that Loona will release the double A-side single, "Hula Hoop / Star Seed" on September 15, with a physical CD release on October 20.[53] In December, Chuu filed an injunction to suspend her exclusive contract with Blockberry Creative.[54][55]"""summarized_text = llm(large_text)print(summarized_text) HaSeul won her first music show trophy with "So What" on Mnet's M Countdown. Loona released their second EP titled [#] (read as hash] on February 5, 2020. HaSeul did not take part in the promotion of the album because of mental health issues. On October 19, 2020, they released their third EP called [12:00]. It was their first album to enter the Billboard 200, debuting at number 112. On June 2, 2021, the group released their fourth EP called Yummy-Yummy. On August 27, it was announced that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.Dolly with LLMChain​from langchain.prompts import PromptTemplatefrom langchain.llms.azureml_endpoint import DollyContentFormatterfrom langchain.chains import LLMChainformatter_template = "Write a {word_count} word essay about {topic}."prompt = PromptTemplate( input_variables=["word_count", "topic"], template=formatter_template)content_formatter = DollyContentFormatter()llm = AzureMLOnlineEndpoint( endpoint_api_key=os.getenv("DOLLY_ENDPOINT_API_KEY"), endpoint_url=os.getenv("DOLLY_ENDPOINT_URL"), model_kwargs={"temperature": 0.8, "max_tokens": 300}, content_formatter=content_formatter,)chain = LLMChain(llm=llm, prompt=prompt)print(chain.run({"word_count": 100, "topic": "how to make friends"})) Many people are willing to talk about themselves; it's others who seem to be stuck up. Try to understand others where they're coming from. Like minded people can build
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML. ->: On June 27, 2021, Loona announced at the end of their special clip that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.[52] On August 27, it was announced that Loona will release the double A-side single, "Hula Hoop / Star Seed" on September 15, with a physical CD release on October 20.[53] In December, Chuu filed an injunction to suspend her exclusive contract with Blockberry Creative.[54][55]"""summarized_text = llm(large_text)print(summarized_text) HaSeul won her first music show trophy with "So What" on Mnet's M Countdown. Loona released their second EP titled [#] (read as hash] on February 5, 2020. HaSeul did not take part in the promotion of the album because of mental health issues. On October 19, 2020, they released their third EP called [12:00]. It was their first album to enter the Billboard 200, debuting at number 112. On June 2, 2021, the group released their fourth EP called Yummy-Yummy. On August 27, it was announced that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.Dolly with LLMChain​from langchain.prompts import PromptTemplatefrom langchain.llms.azureml_endpoint import DollyContentFormatterfrom langchain.chains import LLMChainformatter_template = "Write a {word_count} word essay about {topic}."prompt = PromptTemplate( input_variables=["word_count", "topic"], template=formatter_template)content_formatter = DollyContentFormatter()llm = AzureMLOnlineEndpoint( endpoint_api_key=os.getenv("DOLLY_ENDPOINT_API_KEY"), endpoint_url=os.getenv("DOLLY_ENDPOINT_URL"), model_kwargs={"temperature": 0.8, "max_tokens": 300}, content_formatter=content_formatter,)chain = LLMChain(llm=llm, prompt=prompt)print(chain.run({"word_count": 100, "topic": "how to make friends"})) Many people are willing to talk about themselves; it's others who seem to be stuck up. Try to understand others where they're coming from. Like minded people can build
1,631
they're coming from. Like minded people can build a tribe together.Serializing an LLM​You can also save and load LLM configurationsfrom langchain.llms.loading import load_llmfrom langchain.llms.azureml_endpoint import AzureMLEndpointClientsave_llm = AzureMLOnlineEndpoint( deployment_name="databricks-dolly-v2-12b-4", model_kwargs={ "temperature": 0.2, "max_tokens": 150, "top_p": 0.8, "frequency_penalty": 0.32, "presence_penalty": 72e-3, },)save_llm.save("azureml.json")loaded_llm = load_llm("azureml.json")print(loaded_llm) AzureMLOnlineEndpoint Params: {'deployment_name': 'databricks-dolly-v2-12b-4', 'model_kwargs': {'temperature': 0.2, 'max_tokens': 150, 'top_p': 0.8, 'frequency_penalty': 0.32, 'presence_penalty': 0.072}}PreviousArceeNextAzure OpenAISet upContent FormatterCustom Content FormatterDolly with LLMChainSerializing an LLMCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.
Azure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML. ->: they're coming from. Like minded people can build a tribe together.Serializing an LLM​You can also save and load LLM configurationsfrom langchain.llms.loading import load_llmfrom langchain.llms.azureml_endpoint import AzureMLEndpointClientsave_llm = AzureMLOnlineEndpoint( deployment_name="databricks-dolly-v2-12b-4", model_kwargs={ "temperature": 0.2, "max_tokens": 150, "top_p": 0.8, "frequency_penalty": 0.32, "presence_penalty": 72e-3, },)save_llm.save("azureml.json")loaded_llm = load_llm("azureml.json")print(loaded_llm) AzureMLOnlineEndpoint Params: {'deployment_name': 'databricks-dolly-v2-12b-4', 'model_kwargs': {'temperature': 0.2, 'max_tokens': 150, 'top_p': 0.8, 'frequency_penalty': 0.32, 'presence_penalty': 0.072}}PreviousArceeNextAzure OpenAISet upContent FormatterCustom Content FormatterDolly with LLMChainSerializing an LLMCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
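The same round trip works if you prefer YAML for configuration files. A small sketch follows, reusing the save_llm object from the example above; the file name is arbitrary, and the assumption is that the loader dispatches on the .json/.yaml extension.
from langchain.llms.loading import load_llm

save_llm.save("azureml.yaml")            # serialize the configuration shown above as YAML
loaded_from_yaml = load_llm("azureml.yaml")
print(loaded_from_yaml)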
1,632
Prediction Guard | 🦜️🔗 Langchain
Basic LLM usage
Basic LLM usage ->: Prediction Guard | 🦜️🔗 Langchain
1,633
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsPrediction GuardOn this pagePrediction Guardpip install predictionguard langchainimport osimport predictionguard as pgfrom langchain.llms import PredictionGuardfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainBasic LLM usage​# Optional, add your OpenAI API Key. This is optional, as Prediction Guard allows# you to access all the latest open access models (see https://docs.predictionguard.com)os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>"# Your Prediction Guard API key. Get one at predictionguard.comos.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"pgllm = PredictionGuard(model="OpenAI-text-davinci-003")pgllm("Tell me a joke")Control the output structure/ type of LLMs​template = """Respond to the following query based on the context.Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle
Basic LLM usage
Basic LLM usage ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsPrediction GuardOn this pagePrediction Guardpip install predictionguard langchainimport osimport predictionguard as pgfrom langchain.llms import PredictionGuardfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainBasic LLM usage​# Optional, add your OpenAI API Key. This is optional, as Prediction Guard allows# you to access all the latest open access models (see https://docs.predictionguard.com)os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>"# Your Prediction Guard API key. Get one at predictionguard.comos.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"pgllm = PredictionGuard(model="OpenAI-text-davinci-003")pgllm("Tell me a joke")Control the output structure/ type of LLMs​template = """Respond to the following query based on the context.Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle
1,634
🎉 We have officially added TWO new candle subscription box options! 📦Exclusive Candle Box - $80 Monthly Candle Box - $45 (NEW!)Scent of The Month Box - $28 (NEW!)Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉Query: {query}Result: """prompt = PromptTemplate(template=template, input_variables=["query"])# Without "guarding" or controlling the output of the LLM.pgllm(prompt.format(query="What kind of post is this?"))# With "guarding" or controlling the output of the LLM. See the# Prediction Guard docs (https://docs.predictionguard.com) to learn how to# control the output with integer, float, boolean, JSON, and other types and# structures.pgllm = PredictionGuard( model="OpenAI-text-davinci-003", output={ "type": "categorical", "categories": ["product announcement", "apology", "relational"], },)pgllm(prompt.format(query="What kind of post is this?"))Chaining​pgllm = PredictionGuard(model="OpenAI-text-davinci-003")template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.predict(question=question)template = """Write a {adjective} poem about {subject}."""prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)llm_chain.predict(adjective="sad", subject="ducks")PreviousPredibaseNextPromptLayer OpenAIBasic LLM usageControl the output structure/ type of LLMsChainingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Basic LLM usage
Basic LLM usage ->: 🎉 We have officially added TWO new candle subscription box options! 📦Exclusive Candle Box - $80 Monthly Candle Box - $45 (NEW!)Scent of The Month Box - $28 (NEW!)Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉Query: {query}Result: """prompt = PromptTemplate(template=template, input_variables=["query"])# Without "guarding" or controlling the output of the LLM.pgllm(prompt.format(query="What kind of post is this?"))# With "guarding" or controlling the output of the LLM. See the# Prediction Guard docs (https://docs.predictionguard.com) to learn how to# control the output with integer, float, boolean, JSON, and other types and# structures.pgllm = PredictionGuard( model="OpenAI-text-davinci-003", output={ "type": "categorical", "categories": ["product announcement", "apology", "relational"], },)pgllm(prompt.format(query="What kind of post is this?"))Chaining​pgllm = PredictionGuard(model="OpenAI-text-davinci-003")template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.predict(question=question)template = """Write a {adjective} poem about {subject}."""prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)llm_chain.predict(adjective="sad", subject="ducks")PreviousPredibaseNextPromptLayer OpenAIBasic LLM usageControl the output structure/ type of LLMsChainingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
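The controlled output and the chaining pattern shown above can also be combined. Below is a minimal sketch; the sentiment categories and review text are illustrative, not from the original docs, and the same PREDICTIONGUARD_TOKEN setup as above is assumed.
from langchain.llms import PredictionGuard
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Constrain the model to a fixed label set, then drive it through an LLMChain.
controlled_llm = PredictionGuard(
    model="OpenAI-text-davinci-003",
    output={"type": "categorical", "categories": ["positive", "negative", "neutral"]},
)
sentiment_prompt = PromptTemplate(
    template="Classify the sentiment of this review: {review}",
    input_variables=["review"],
)
sentiment_chain = LLMChain(prompt=sentiment_prompt, llm=controlled_llm)
print(sentiment_chain.predict(review="The candles smell amazing and shipping was fast."))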
1,635
Eden AI | 🦜️🔗 Langchain
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website//edenai.co/) ->: Eden AI | 🦜️🔗 Langchain
1,636
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsEden AIOn this pageEden AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)This example goes over how to use LangChain to interact with Eden AI modelsAccessing the EDENAI's API requires an API key, which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/account/settingsOnce we have a key we'll want to set it as an environment variable by running:export EDENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the edenai_api_key named
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website//edenai.co/) ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsEden AIOn this pageEden AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)This example goes over how to use LangChain to interact with Eden AI modelsAccessing the EDENAI's API requires an API key, which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/account/settingsOnce we have a key we'll want to set it as an environment variable by running:export EDENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the edenai_api_key named
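If you go the environment-variable route instead of passing edenai_api_key directly, a minimal sketch that mirrors the getpass pattern used by the other integrations on these pages (prompting only when the key is not already set):
import os
from getpass import getpass
from langchain.llms import EdenAI

if "EDENAI_API_KEY" not in os.environ:
    os.environ["EDENAI_API_KEY"] = getpass("Eden AI API key:")

# With the key in the environment, edenai_api_key does not need to be passed explicitly.
llm = EdenAI(provider="openai", temperature=0.2, max_tokens=250)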
1,637
the key in directly via the edenai_api_key named parameter when instantiating the EdenAI LLM class:from langchain.llms import EdenAIllm = EdenAI(edenai_api_key="...",provider="openai", temperature=0.2, max_tokens=250)Calling a model​The EdenAI API brings together various providers, each offering multiple models.To access a specific model, you can simply add 'model' during instantiation.For instance, let's explore the models provided by OpenAI, such as GPT3.5 text generation​from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainllm=EdenAI(feature="text",provider="openai",model="text-davinci-003",temperature=0.2, max_tokens=250)prompt = """User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?Assistant:"""llm(prompt)image generation​import base64from io import BytesIOfrom PIL import Imageimport jsondef print_base64_image(base64_string): # Decode the base64 string into binary data decoded_data = base64.b64decode(base64_string) # Create an in-memory stream to read the binary data image_stream = BytesIO(decoded_data) # Open the image using PIL image = Image.open(image_stream) # Display the image image.show()text2image = EdenAI( feature="image" , provider= "openai", resolution="512x512")image_output = text2image("A cat riding a motorcycle by Picasso")print_base64_image(image_output)text generation with callback​from langchain.llms import EdenAIfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = EdenAI( callbacks=[StreamingStdOutCallbackHandler()], feature="text",provider="openai", temperature=0.2,max_tokens=250)prompt = """User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?Assistant:"""print(llm(prompt))Chaining Calls​from langchain.chains import SimpleSequentialChainfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainllm = EdenAI(feature="text",
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/) ->: the key in directly via the edenai_api_key named parameter when instantiating the EdenAI LLM class:from langchain.llms import EdenAIllm = EdenAI(edenai_api_key="...",provider="openai", temperature=0.2, max_tokens=250)Calling a model​The EdenAI API brings together various providers, each offering multiple models.To access a specific model, you can simply add 'model' during instantiation.For instance, let's explore the models provided by OpenAI, such as GPT3.5 text generation​from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainllm=EdenAI(feature="text",provider="openai",model="text-davinci-003",temperature=0.2, max_tokens=250)prompt = """User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?Assistant:"""llm(prompt)image generation​import base64from io import BytesIOfrom PIL import Imageimport jsondef print_base64_image(base64_string): # Decode the base64 string into binary data decoded_data = base64.b64decode(base64_string) # Create an in-memory stream to read the binary data image_stream = BytesIO(decoded_data) # Open the image using PIL image = Image.open(image_stream) # Display the image image.show()text2image = EdenAI( feature="image" , provider= "openai", resolution="512x512")image_output = text2image("A cat riding a motorcycle by Picasso")print_base64_image(image_output)text generation with callback​from langchain.llms import EdenAIfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = EdenAI( callbacks=[StreamingStdOutCallbackHandler()], feature="text",provider="openai", temperature=0.2,max_tokens=250)prompt = """User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?Assistant:"""print(llm(prompt))Chaining Calls​from langchain.chains import SimpleSequentialChainfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainllm = EdenAI(feature="text",
1,638
import LLMChainllm = EdenAI(feature="text", provider="openai", temperature=0.2, max_tokens=250)text2image = EdenAI(feature="image", provider="openai", resolution="512x512")prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?",)chain = LLMChain(llm=llm, prompt=prompt)second_prompt = PromptTemplate( input_variables=["company_name"], template="Write a description of a logo for this company: {company_name}, the logo should not contain text at all ",)chain_two = LLMChain(llm=llm, prompt=second_prompt)third_prompt = PromptTemplate( input_variables=["company_logo_description"], template="{company_logo_description}",)chain_three = LLMChain(llm=text2image, prompt=third_prompt)# Run the chain specifying only the input variable for the first chain.overall_chain = SimpleSequentialChain( chains=[chain, chain_two, chain_three],verbose=True)output = overall_chain.run("hats")#print the imageprint_base64_image(output)PreviousDeepSparseNextFireworksCalling a modeltext generationimage generationtext generation with callbackChaining CallsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)
Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website//edenai.co/) ->: import LLMChainllm = EdenAI(feature="text", provider="openai", temperature=0.2, max_tokens=250)text2image = EdenAI(feature="image", provider="openai", resolution="512x512")prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?",)chain = LLMChain(llm=llm, prompt=prompt)second_prompt = PromptTemplate( input_variables=["company_name"], template="Write a description of a logo for this company: {company_name}, the logo should not contain text at all ",)chain_two = LLMChain(llm=llm, prompt=second_prompt)third_prompt = PromptTemplate( input_variables=["company_logo_description"], template="{company_logo_description}",)chain_three = LLMChain(llm=text2image, prompt=third_prompt)# Run the chain specifying only the input variable for the first chain.overall_chain = SimpleSequentialChain( chains=[chain, chain_two, chain_three],verbose=True)output = overall_chain.run("hats")#print the imageprint_base64_image(output)PreviousDeepSparseNextFireworksCalling a modeltext generationimage generationtext generation with callbackChaining CallsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,639
ForefrontAI | 🦜️🔗 Langchain
The Forefront platform gives you the ability to fine-tune and use open-source large language models.
The Forefront platform gives you the ability to fine-tune and use open-source large language models. ->: ForefrontAI | 🦜️🔗 Langchain
1,640
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsForefrontAIOn this pageForefrontAIThe Forefront platform gives you the ability to fine-tune and use open-source large language models.This notebook goes over how to use Langchain with ForefrontAI.Imports​import osfrom langchain.llms import ForefrontAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API Key​Make sure to get your API key from ForefrontAI. You are given a 5 day free trial to test different models.# get a new token: https://docs.forefront.ai/forefront/api-reference/authenticationfrom getpass import getpassFOREFRONTAI_API_KEY = getpass()os.environ["FOREFRONTAI_API_KEY"] = FOREFRONTAI_API_KEYCreate the ForefrontAI instance​You can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")Create a Prompt Template​We will create a prompt template
The Forefront platform gives you the ability to fine-tune and use open-source large language models.
The Forefront platform gives you the ability to fine-tune and use open-source large language models. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsLLMsAI21Aleph AlphaAmazon API GatewayAnyscaleArceeAzure MLAzure OpenAIBaidu QianfanBananaBasetenBeamBedrockBittensorCerebriumAIChatGLMClarifaiCohereC TransformersCTranslate2DatabricksDeepInfraDeepSparseEden AIFireworksForefrontAIGCP Vertex AIGooseAIGPT4AllGradientHugging Face HubHugging Face Local PipelinesHuggingface TextGen InferenceJavelin AI Gateway TutorialJSONFormerKoboldAI APILlama.cppLLM Caching integrationsManifestMinimaxModalMosaicMLNLP CloudOctoAIOllamaOpaquePromptsOpenAIOpenLLMOpenLMPetalsPipelineAIPredibasePrediction GuardPromptLayer OpenAIRELLMReplicateRunhouseSageMakerEndpointStochasticAINebula (Symbl.ai)TextGenTitan TakeoffTogether AITongyi QwenvLLMWriterXorbits Inference (Xinference)YandexGPTChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAgents and toolkitsMemoryCallbacksChat loadersComponentsLLMsForefrontAIOn this pageForefrontAIThe Forefront platform gives you the ability to fine-tune and use open-source large language models.This notebook goes over how to use Langchain with ForefrontAI.Imports​import osfrom langchain.llms import ForefrontAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API Key​Make sure to get your API key from ForefrontAI. You are given a 5 day free trial to test different models.# get a new token: https://docs.forefront.ai/forefront/api-reference/authenticationfrom getpass import getpassFOREFRONTAI_API_KEY = getpass()os.environ["FOREFRONTAI_API_KEY"] = FOREFRONTAI_API_KEYCreate the ForefrontAI instance​You can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")Create a Prompt Template​We will create a prompt template
1,641
Template​We will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChain​llm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChain​Provide a question and run the LLMChain.question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousFireworksNextGCP Vertex AIImportsSet the Environment API KeyCreate the ForefrontAI instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
The Forefront platform gives you the ability to fine-tune and use open-source large language models.
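For convenience, the ForefrontAI pieces above can be assembled into one script. This is a minimal sketch, not part of the original notebook: the endpoint URL stays a placeholder, and the environment-variable fallback around getpass() is an added convenience rather than something the page requires.

import os
from getpass import getpass

from langchain.llms import ForefrontAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Prefer a pre-set environment variable; fall back to an interactive prompt.
if not os.environ.get("FOREFRONTAI_API_KEY"):
    os.environ["FOREFRONTAI_API_KEY"] = getpass("ForefrontAI API key: ")

# Placeholder endpoint -- substitute the URL of your deployed Forefront model.
llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Beiber was born?"))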
1,642
StochasticAI | 🦜️🔗 Langchain
Stochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model. From uploading and versioning the model, through training, compression and acceleration to putting it into production.
1,643
StochasticAIStochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model. From uploading and versioning the model, through training, compression and acceleration to putting it into production.This example goes over how to use LangChain to interact with StochasticAI models.You have to get the API_KEY and the API_URL here.from getpass import getpassSTOCHASTICAI_API_KEY = getpass() ········import osos.environ["STOCHASTICAI_API_KEY"] = STOCHASTICAI_API_KEYYOUR_API_URL = getpass() ········from langchain.llms import StochasticAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = StochasticAI(api_url=YOUR_API_URL)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super
Stochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model. From uploading and versioning the model, through training, compression and acceleration to putting it into production.
1,644
llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) "\n\nStep 1: In 1999, the St. Louis Rams won the Super Bowl.\n\nStep 2: In 1999, Beiber was born.\n\nStep 3: The Rams were in Los Angeles at the time.\n\nStep 4: So they didn't play in the Super Bowl that year.\n"
Stochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model. From uploading and versioning the model, through training, compression and acceleration to putting it into production.
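As a non-interactive alternative to the getpass() prompts above, both credentials can be read from the environment. A minimal sketch, assuming STOCHASTICAI_API_KEY and an illustrative STOCHASTICAI_API_URL variable have been exported beforehand; the latter name is an assumption for this example, not something the wrapper itself looks up.

import os

from langchain.llms import StochasticAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# STOCHASTICAI_API_KEY is picked up from the environment by the wrapper;
# the API URL variable name below is illustrative.
api_url = os.environ["STOCHASTICAI_API_URL"]

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = StochasticAI(api_url=api_url)
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Beiber was born?"))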
1,645
Hugging Face Hub | 🦜️🔗 Langchain
The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
1,646
Hugging Face HubThe Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.This example showcases how to connect to the Hugging Face Hub and use different models.Installation and Setup​To use, you should have the huggingface_hub python package installed.pip install huggingface_hub# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-tokenfrom getpass import getpassHUGGINGFACEHUB_API_TOKEN = getpass() ········import osos.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKENPrepare Examples​from langchain.llms import HuggingFaceHubfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainquestion = "Who won the FIFA World Cup in the year 1994? "template = """Question:
The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
1,647
Cup in the year 1994? "template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Examples​Below are some examples of models you can access through the Hugging Face Hub integration.Flan, by Google​repo_id = "google/flan-t5-xxl" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other optionsllm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question)) The FIFA World Cup was held in the year 1994. West Germany won the FIFA World Cup in 1994Dolly, by Databricks​See Databricks organization page for a list of available models.repo_id = "databricks/dolly-v2-3b"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question)) First of all, the world cup was won by the Germany. Then the Argentina won the world cup in 2022. So, the Argentina won the world cup in 1994. Question: WhoCamel, by Writer​See Writer's organization page for a list of available models.repo_id = "Writer/camel-5b-hf" # See https://huggingface.co/Writer for other optionsllm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))XGen, by Salesforce​See more information.repo_id = "Salesforce/xgen-7b-8k-base"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))Falcon, by Technology Innovation Institute (TII)​See more information.repo_id = "tiiuae/falcon-40b"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt,
The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
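Since the per-model snippets above differ only in repo_id, they can be driven from a single loop. A small consolidation sketch, assuming HUGGINGFACEHUB_API_TOKEN is already set as shown earlier; the list below is just a subset of the repositories mentioned on this page.

from langchain.llms import HuggingFaceHub
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

question = "Who won the FIFA World Cup in the year 1994? "
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

repo_ids = [
    "google/flan-t5-xxl",
    "databricks/dolly-v2-3b",
    "Writer/camel-5b-hf",
]

for repo_id in repo_ids:
    llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    # Print each model's answer side by side for a rough comparison.
    print(f"--- {repo_id} ---")
    print(llm_chain.run(question))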
1,648
64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))InternLM-Chat, by Shanghai AI Laboratory​See more information.repo_id = "internlm/internlm-chat-7b"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.8})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))Qwen, by Alibaba Cloud​Tongyi Qianwen-7B (Qwen-7B) is a 7-billion-parameter model in the Tongyi Qianwen large model series developed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model trained on ultra-large-scale pre-training data.See more information on HuggingFace or on GitHub.See here a detailed example of LangChain integration with Qwen.repo_id = "Qwen/Qwen-7B"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.5})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))
The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
1,649
Anyscale | 🦜️🔗 Langchain
Anyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications
1,650
AnyscaleAnyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications.This example goes over how to use LangChain to interact with Anyscale Endpoint. import osos.environ["ANYSCALE_API_BASE"] = ANYSCALE_API_BASEos.environ["ANYSCALE_API_KEY"] = ANYSCALE_API_KEYfrom langchain.llms import Anyscalefrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = Anyscale(model_name=ANYSCALE_MODEL_NAME)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "When was George Washington president?"llm_chain.run(question)With Ray, we can distribute the queries without an asynchronous implementation. This applies not only to the Anyscale LLM model, but to any other LangChain LLM models which do not have _acall or
Anyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications
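Note that the snippet above references ANYSCALE_API_BASE, ANYSCALE_API_KEY and ANYSCALE_MODEL_NAME without defining them. A sketch of how they might be supplied is below; every concrete value is a placeholder chosen for illustration, not taken from the original page.

import os

# All three values are placeholders -- substitute the endpoint, key and model
# name for your own Anyscale Endpoints deployment.
ANYSCALE_API_BASE = "https://api.endpoints.anyscale.com/v1"   # illustrative
ANYSCALE_API_KEY = "esecret_xxxxxxxxxxxxxxxx"                 # illustrative
ANYSCALE_MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"         # illustrative

os.environ["ANYSCALE_API_BASE"] = ANYSCALE_API_BASE
os.environ["ANYSCALE_API_KEY"] = ANYSCALE_API_KEY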
1,651
LangChain LLM models which do not have _acall or _agenerate implementedprompt_list = [ "When was George Washington president?", "Explain to me the difference between nuclear fission and fusion.", "Give me a list of 5 science fiction books I should read next.", "Explain the difference between Spark and Ray.", "Suggest some fun holiday ideas.", "Tell a joke.", "What is 2+2?", "Explain what is machine learning like I am five years old.", "Explain what is artifical intelligence.",]import ray@ray.remote(num_cpus=0.1)def send_query(llm, prompt): resp = llm(prompt) return respfutures = [send_query.remote(llm, prompt) for prompt in prompt_list]results = ray.get(futures)
Anyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications
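Ray is one way to fan the prompts out; where Ray is not installed, a similar fan-out can be approximated with Python's standard concurrent.futures thread pool. This is a sketch of that alternative, not part of the original example, and it reuses llm and prompt_list from the snippet above.

from concurrent.futures import ThreadPoolExecutor

def send_query(llm, prompt):
    # Each call blocks on the remote endpoint, so running them in threads
    # overlaps the waiting time rather than the computation.
    return llm(prompt)

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(lambda p: send_query(llm, p), prompt_list))

for prompt, result in zip(prompt_list, results):
    print(prompt, "->", result[:80])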
1,652
GooseAI | 🦜️🔗 Langchain
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.
1,653
GooseAIGooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.This notebook goes over how to use Langchain with GooseAI.Install openai​The openai package is required to use the GooseAI API. Install openai using pip3 install openai.$ pip3 install openaiImports​import osfrom langchain.llms import GooseAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API Key​Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models.from getpass import getpassGOOSEAI_API_KEY = getpass()os.environ["GOOSEAI_API_KEY"] = GOOSEAI_API_KEYCreate the GooseAI instance​You can specify different parameters such as the model name, max tokens generated, temperature, etc.llm = GooseAI()Create a Prompt Template​We will create a prompt template for Question and Answer.template = """Question:
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.
1,654
for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChain​llm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChain​Provide a question and run the LLMChain.question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.
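The page notes that the model name, max tokens and temperature can be configured on the GooseAI instance. A sketch of doing so is below; the parameter names (model_name, temperature, max_tokens) and the model identifier are assumptions based on the wrapper's OpenAI-style interface, so verify them against the GooseAI class in your installed version before relying on them.

from langchain.llms import GooseAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Parameter names below are assumed, mirroring the OpenAI-style fields.
llm = GooseAI(
    model_name="gpt-neo-20b",  # illustrative model identifier
    temperature=0.7,
    max_tokens=64,
)
prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer: Let's think step by step.",
    input_variables=["question"],
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Beiber was born?"))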
1,655
CerebriumAI | 🦜️🔗 Langchain
Cerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models.
1,656
CerebriumAICerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models.This notebook goes over how to use Langchain with CerebriumAI.Install cerebrium​The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.# Install the packagepip3 install cerebriumImports​import osfrom langchain.llms import CerebriumAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API Key​Make sure to get your API key from CerebriumAI. See here. You are given 1 hour of serverless GPU compute free to test different models.os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"Create the CerebriumAI instance​You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL
Cerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models.
1,657
= CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")Create a Prompt Template​We will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChain​llm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChain​Provide a question and run the LLMChain.question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)
Cerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models.
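Once the chain is built it can be reused for any number of questions. A small sketch of wrapping it in a helper function; the endpoint URL remains the placeholder used above and the helper name ask is purely illustrative.

from langchain.llms import CerebriumAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")  # placeholder endpoint
prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer: Let's think step by step.",
    input_variables=["question"],
)
llm_chain = LLMChain(prompt=prompt, llm=llm)

def ask(question: str) -> str:
    """Run a single question through the chain and return the model's answer."""
    return llm_chain.run(question)

print(ask("What NFL team won the Super Bowl in the year Justin Beiber was born?"))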
1,658
C Transformers | 🦜️🔗 Langchain
The C Transformers library provides Python bindings for GGML models.
1,659
C TransformersThe C Transformers library provides Python bindings for GGML models.This example goes over how to use LangChain to interact with C Transformers models.Install%pip install ctransformersLoad Modelfrom langchain.llms import CTransformersllm = CTransformers(model="marella/gpt-2-ggml")Generate Textprint(llm("AI is going to"))Streamingfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = CTransformers( model="marella/gpt-2-ggml", callbacks=[StreamingStdOutCallbackHandler()])response = llm("AI is going to")LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer:"""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=llm)response = llm_chain.run("What is
The C Transformers library provides Python bindings for GGML models.
1,660
llm=llm)response = llm_chain.run("What is AI?")
The C Transformers library provides Python bindings for GGML models.
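Generation settings for a local GGML model can also be tuned at construction time. A short sketch, assuming the CTransformers wrapper accepts a config dictionary that is forwarded to the underlying ctransformers library; the keys shown are common ctransformers options rather than values taken from this page, so check the library's documentation for the full list.

from langchain.llms import CTransformers

# `config` is assumed to be passed through to ctransformers; max_new_tokens and
# temperature are typical options for GGML generation.
llm = CTransformers(
    model="marella/gpt-2-ggml",
    config={"max_new_tokens": 256, "temperature": 0.8},
)
print(llm("AI is going to"))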
1,661
Together AI | 🦜️🔗 Langchain
The Together API makes it easy to fine-tune or run leading open-source models with a couple lines of code. We have integrated the world’s leading open-source models, including Llama-2, RedPajama, Falcon, Alpaca, Stable Diffusion XL, and more. Read more: https://together.ai
1,662
Together AIThe Together API makes it easy to fine-tune or run leading open-source models with a couple lines of code. We have integrated the world’s leading open-source models, including Llama-2, RedPajama, Falcon, Alpaca, Stable Diffusion XL, and more. Read more: https://together.aiTo use, you'll need an API key which you can find here: https://api.together.xyz/settings/api-keys. This can be passed in as init param
The Together API makes it easy to fine-tune or run leading open-source models with a couple lines of code. We have integrated the world’s leading open-source models, including Llama-2, RedPajama, Falcon, Alpaca, Stable Diffusion XL, and more. Read more: https://together.ai
1,663
together_api_key or set as environment variable TOGETHER_API_KEY.Together API reference: https://docs.together.ai/reference/inferencefrom langchain.llms import Togetherllm = Together( model="togethercomputer/RedPajama-INCITE-7B-Base", temperature=0.7, max_tokens=128, top_k=1, # together_api_key="...")input_ = """You are a teacher with a deep knowledge of machine learning and AI. \You provide succinct and accurate answers. Answer the following question: What is a large language model?"""print(llm(input_)) A: A large language model is a neural network that is trained on a large amount of text data. It is able to generate text that is similar to the training data, and can be used for tasks such as language translation, question answering, and text summarization. A: A large language model is a neural network that is trained on a large amount of text data. It is able to generate text that is similar to the training data, and can be used for tasks such as language translation, question answering, and text summarization. A: A large language model is a neural network that is trained on
The Together API makes it easy to fine-tune or run leading open-source models with a couple lines of code. We have integrated the world’s leading open-source models, including Llama-2, RedPajama, Falcon, Alpaca, Stable Diffusion XL, and more. Read more: https://together.ai
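Following the same pattern as the other integrations on these pages, the Together LLM can also be dropped into an LLMChain. A minimal sketch, assuming TOGETHER_API_KEY is set in the environment; the model identifier is the one shown above.

from langchain.llms import Together
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = Together(
    model="togethercomputer/RedPajama-INCITE-7B-Base",
    temperature=0.7,
    max_tokens=128,
    top_k=1,
)
prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer: Let's think step by step.",
    input_variables=["question"],
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What is a large language model?"))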
1,664
NLP Cloud | 🦜️🔗 Langchain
The NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.
1,665
NLP CloudThe NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.This example goes over how to use LangChain to interact with NLP Cloud models.pip install nlpcloud# get a token: https://docs.nlpcloud.com/#authenticationfrom getpass import getpassNLPCLOUD_API_KEY = getpass() ········import osos.environ["NLPCLOUD_API_KEY"] = NLPCLOUD_API_KEYfrom langchain.llms import
The NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.
1,666
= NLPCLOUD_API_KEYfrom langchain.llms import NLPCloudfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = NLPCloud()llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) ' Justin Bieber was born in 1994, so the team that won the Super Bowl that year was the San Francisco 49ers.'
The NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.
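The example above constructs NLPCloud() with its defaults. As a rough sketch, not something shown on this page, the wrapper also accepts constructor arguments such as model_name and temperature; the exact argument names and the model identifier below are assumptions to verify against your installed langchain version and the NLP Cloud documentation.

# Hedged sketch: selecting a specific NLP Cloud model and sampling settings.
# `model_name` and `temperature` are assumed constructor arguments of
# langchain.llms.NLPCloud; "finetuned-llama-2-70b" is a hypothetical model id.
from langchain.llms import NLPCloud

llm = NLPCloud(
    model_name="finetuned-llama-2-70b",
    temperature=0.1,
)
print(llm("Give me one fun fact about the Super Bowl."))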
1,667
Replicate | 🦜️🔗 Langchain
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
1,668
Replicate

Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.

This example goes over how to use LangChain to interact with Replicate models.

Setup

# magics to auto-reload external modules in case you are making changes to langchain while working on this notebook
%autoreload 2

To run this notebook, you'll need to create a replicate account and install the replicate python client.

poetry run pip install replicate

    Collecting replicate
      Using cached replicate-0.9.0-py3-none-any.whl (21 kB)
    Requirement already satisfied: packaging in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from replicate) (23.1)
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
1,669
    Requirement already satisfied: pydantic>1 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from replicate) (1.10.9)
    Requirement already satisfied: requests>2 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from replicate) (2.28.2)
    Requirement already satisfied: typing-extensions>=4.2.0 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from pydantic>1->replicate) (4.5.0)
    Requirement already satisfied: charset-normalizer<4,>=2 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (3.1.0)
    Requirement already satisfied: idna<4,>=2.5 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (3.4)
    Requirement already satisfied: urllib3<1.27,>=1.21.1 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (1.26.16)
    Requirement already satisfied: certifi>=2017.4.17 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (2023.5.7)
    Installing collected packages: replicate
    Successfully installed replicate-0.9.0

# get a token: https://replicate.com/account
from getpass import getpass

REPLICATE_API_TOKEN = getpass()

import os

os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN

from langchain.llms import Replicate
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

Calling a model

Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name/version.

For example, here is Llama-V2.

llm = Replicate(
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    model_kwargs={"temperature": 0.75, "max_length": 500, "top_p": 1},
)
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
1,670
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
llm(prompt)

    '1. Dogs do not have the ability to operate complex machinery like cars.\n2. Dogs do not have human-like intelligence or cognitive abilities to understand the concept of driving.\n3. Dogs do not have the physical ability to use their paws to press pedals or turn a steering wheel.\n4. Therefore, a dog cannot drive a car.'

As another example, for this dolly model, click on the API tab. The model name/version would be: replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5

Only the model param is required, but we can add other model params when initializing.

For example, if we were running stable diffusion and wanted to change the image dimensions:

Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})

Note that only the first output of a model will be returned.

llm = Replicate(
    model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"
)
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)

    'No, dogs lack some of the brain functions required to operate a motor vehicle. They cannot focus and react in time to accelerate or brake correctly. Additionally, they do not have enough muscle control to properly operate a steering wheel.\n\n'

We can call any replicate model using this syntax. For example, we can call stable diffusion.

text2image = Replicate(
    model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
    model_kwargs={"image_dimensions": "512x512"},
)
image_output = text2image("A cat riding a motorcycle by Picasso")
image_output

    'https://pbxt.replicate.delivery/bqQq4KtzwrrYL9Bub9e7NvMTDeEMm5E9VZueTXkLE7kWumIjA/out-0.png'
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
1,671
The model spits out a URL. Let's render it.

poetry run pip install Pillow

    Requirement already satisfied: Pillow in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (9.5.0)
    [notice] A new release of pip is available: 23.2 -> 23.2.1
    [notice] To update, run: pip install --upgrade pip

from PIL import Image
import requests
from io import BytesIO

response = requests.get(image_output)
img = Image.open(BytesIO(response.content))
img

Streaming Response

You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See the detailed docs on Streaming for more information.

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = Replicate(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    model_kwargs={"temperature": 0.75, "max_length": 500, "top_p": 1},
)
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
_ = llm(prompt)

    1. Dogs do not have the physical ability to operate a vehicle.

Stop Sequences

You can also specify stop sequences. If you have a definite stop sequence that you are going to parse the generation on anyway, it is better (cheaper and faster!) to cancel the generation once one or more stop sequences are reached, rather than letting the model ramble on till the specified max_length. Stop sequences work regardless of whether you are in streaming mode or not, and Replicate only charges you for the generation up until the stop sequence.

import time

llm = Replicate(
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    model_kwargs={"temperature": 0.01, "max_length": 500, "top_p": 1},
)
prompt = """
User: What is the best way to learn python?
Assistant:
"""
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
1,672
start_time = time.perf_counter()
raw_output = llm(prompt)  # raw output, no stop
end_time = time.perf_counter()
print(f"Raw output:\n {raw_output}")
print(f"Raw output runtime: {end_time - start_time} seconds")

start_time = time.perf_counter()
stopped_output = llm(prompt, stop=["\n\n"])  # stop on double newlines
end_time = time.perf_counter()
print(f"Stopped output:\n {stopped_output}")
print(f"Stopped output runtime: {end_time - start_time} seconds")

    Raw output:
    There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are a few suggestions:
    1. Online tutorials and courses: Websites such as Codecademy, Coursera, and edX offer interactive coding lessons and courses that can help you get started with Python. These courses are often designed for beginners and cover the basics of Python programming.
    2. Books: There are many books available that can teach you Python, ranging from introductory texts to more advanced manuals. Some popular options include "Python Crash Course" by Eric Matthes, "Automate the Boring Stuff with Python" by Al Sweigart, and "Python for Data Analysis" by Wes McKinney.
    3. Videos: YouTube and other video platforms have a wealth of tutorials and lectures on Python programming. Many of these videos are created by experienced programmers and can provide detailed explanations and examples of Python concepts.
    4. Practice: One of the best ways to learn Python is to practice writing code. Start with simple programs and gradually work your way up to more complex projects. As you gain experience, you'll become more comfortable with the language and develop a better understanding of its capabilities.
    5. Join a community: There are many online communities and forums dedicated to Python programming, such as Reddit's r/learnpython community. These communities can provide support, resources, and feedback as you learn.
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
1,673
    6. Take online courses: Many universities and organizations offer online courses on Python programming. These courses can provide a structured learning experience and often include exercises and assignments to help you practice your skills.
    7. Use a Python IDE: An Integrated Development Environment (IDE) is a software application that provides an interface for writing, debugging, and testing code. Popular Python IDEs include PyCharm, Visual Studio Code, and Spyder. These tools can help you write more efficient code and provide features such as code completion, debugging, and project management.
    Which of the above options do you think is the best way to learn Python?
    Raw output runtime: 25.27470933299992 seconds
    Stopped output:
    There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are some suggestions:
    Stopped output runtime: 25.77039254200008 seconds

Chaining Calls

The whole point of langchain is to... chain! Here's an example of how to do that.

from langchain.chains import SimpleSequentialChain

First, let's define the LLM as a Dolly model and text2image as a Stable Diffusion model.

dolly_llm = Replicate(
    model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"
)
text2image = Replicate(
    model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf"
)

First prompt in the chain:

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=dolly_llm, prompt=prompt)

Second prompt to get the logo for the company description:

second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a description of a logo for this company: {company_name}",
)
chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)

Third prompt, let's create the image based on the description output from prompt 2:
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
1,674
third_prompt = PromptTemplate(
    input_variables=["company_logo_description"],
    template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)

Now let's run it!

# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(
    chains=[chain, chain_two, chain_three], verbose=True
)
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)

    > Entering new SimpleSequentialChain chain...
    Colorful socks could be named after a song by The Beatles or a color (yellow, blue, pink). A good combination of letters and digits would be 6399. Apple also owns the domain 6399.com so this could be reserved for the Company.
    A colorful sock with the numbers 3, 9, and 99 screen printed in yellow, blue, and pink, respectively.
    https://pbxt.replicate.delivery/P8Oy3pZ7DyaAC1nbJTxNw95D1A3gCPfi2arqlPGlfG9WYTkRA/out-0.png
    > Finished chain.
    https://pbxt.replicate.delivery/P8Oy3pZ7DyaAC1nbJTxNw95D1A3gCPfi2arqlPGlfG9WYTkRA/out-0.png

response = requests.get(
    "https://replicate.delivery/pbxt/682XgeUlFela7kmZgPOf39dDdGDDkwjsCIJ0aQ0AO5bTbbkiA/out-0.png"
)
img = Image.open(BytesIO(response.content))
img
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
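The Replicate walkthrough above shows streaming and stop sequences separately, and notes that stop sequences work in streaming mode too. The following is a minimal sketch, not taken from the original page, that combines the two on the same Llama 2 model ID and kwargs used above.

# Minimal sketch combining streaming and stop sequences with the same
# Replicate-hosted Llama 2 model used above. The model ID and kwargs are
# copied from the examples on this page; everything else is illustrative.
from langchain.llms import Replicate
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = Replicate(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    model_kwargs={"temperature": 0.01, "max_length": 500, "top_p": 1},
)

prompt = """
User: What is the best way to learn python?
Assistant:
"""
# Tokens print as they arrive; generation halts at the first double newline,
# so Replicate only bills for the text produced up to that point.
_ = llm(prompt, stop=["\n\n"])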
1,675
Streaming | 🦜️🔗 Langchain
Some LLMs provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
1,676
Streaming

Some LLMs provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.

Currently, we support streaming for a broad range of LLM implementations, including but not limited to OpenAI, ChatOpenAI, ChatAnthropic, Hugging Face Text Generation Inference, and Replicate; streaming support has been expanded to cover most models. To utilize streaming, use a CallbackHandler that implements on_llm_new_token. In this example, we are using StreamingStdOutCallbackHandler.

from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = llm("Write me a song about sparkling water.")

    Verse 1
    I'm sippin' on sparkling water,
    It's so refreshing and light,
    It's the perfect way to quench my thirst
    On a hot summer night.

    Chorus
    Sparkling water, sparkling water,
    It's the best way to stay hydrated,
    It's so crisp and so clean,
    It's the perfect way to stay refreshed.

    Verse 2
    I'm sippin' on sparkling water,
    It's so bubbly and bright,
    It's the perfect way to cool me down
    On a hot summer night.
Some LLMs provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
1,677
    Chorus
    Sparkling water, sparkling water,
    It's the best way to stay hydrated,
    It's so crisp and so clean,
    It's the perfect way to stay refreshed.

    Verse 3
    I'm sippin' on sparkling water,
    It's so light and so clear,
    It's the perfect way to keep me cool
    On a hot summer night.

    Chorus
    Sparkling water, sparkling water,
    It's the best way to stay hydrated,
    It's so crisp and so clean,
    It's the perfect way to stay refreshed.

We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming.

llm.generate(["Tell me a joke."])

    Q: What did the fish say when it hit the wall?
    A: Dam!

    LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {}, 'model_name': 'text-davinci-003'})
Some LLMs provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
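StreamingStdOutCallbackHandler simply prints each token to stdout. Because streaming only requires a CallbackHandler that implements on_llm_new_token, you can substitute your own handler. Below is a minimal sketch, not part of the original page; the TokenCollector class name and its tokens attribute are illustrative, not part of LangChain.

# Minimal sketch of a custom streaming callback. It relies only on the
# documented hook above (on_llm_new_token) and the BaseCallbackHandler base class.
from langchain.llms import OpenAI
from langchain.callbacks.base import BaseCallbackHandler


class TokenCollector(BaseCallbackHandler):
    """Collect streamed tokens so they can be inspected after the call."""

    def __init__(self) -> None:
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per new token while the LLM is streaming.
        self.tokens.append(token)


collector = TokenCollector()
llm = OpenAI(streaming=True, callbacks=[collector], temperature=0)
llm("Write me a haiku about sparkling water.")
print(len(collector.tokens), "tokens streamed")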
1,678
OpenLM | 🦜️🔗 Langchain
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.
1,679
OpenLM

OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code.

This example goes over how to use LangChain to interact with both OpenAI and HuggingFace. You'll need API keys from both.

Setup

Install dependencies and set API keys.

# Uncomment to install openlm and openai if you haven't already
# !pip install openlm
# !pip install openai

from getpass import getpass
import os
import subprocess

# Check if OPENAI_API_KEY environment variable is set
if "OPENAI_API_KEY" not in os.environ:
    print("Enter your OpenAI API key:")
    os.environ["OPENAI_API_KEY"] = getpass()

# Check if HF_API_TOKEN environment variable is set
if "HF_API_TOKEN" not in os.environ:
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.
1,680
    print("Enter your HuggingFace Hub API key:")
    os.environ["HF_API_TOKEN"] = getpass()

Using LangChain with OpenLM

Here we're going to call two models in an LLMChain, text-davinci-003 from OpenAI and gpt2 on HuggingFace.

from langchain.llms import OpenLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

question = "What is the capital of France?"
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

for model in ["text-davinci-003", "huggingface.co/gpt2"]:
    llm = OpenLM(model=model)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    result = llm_chain.run(question)
    print(
        """Model: {}
Result: {}""".format(
            model, result
        )
    )

    Model: text-davinci-003
    Result: France is a country in Europe. The capital of France is Paris.

    Model: huggingface.co/gpt2
    Result: Question: What is the capital of France?
    Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.
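Since OpenLM implements the standard LLM interface, it can also be called directly as a drop-in replacement, without wrapping it in an LLMChain. This is a small illustrative sketch rather than something from the page above; the model identifiers are the same two used in the loop.

# Minimal sketch: OpenLM used as a drop-in LLM without an LLMChain.
# Model identifiers are the same ones used in the loop above.
from langchain.llms import OpenLM

llm = OpenLM(model="text-davinci-003")
print(llm("Say hello in French."))

# The same interface works for a HuggingFace-hosted model.
hf_llm = OpenLM(model="huggingface.co/gpt2")
print(hf_llm("Say hello in French."))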
1,681
Baseten | 🦜️🔗 Langchain
Baseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.
1,682
Baseten

Baseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.

This example demonstrates using Langchain with models deployed on Baseten.

Setup

To run this notebook, you'll need a Baseten account and an API key.

You'll also need to install the Baseten Python package:

pip install baseten

import baseten

baseten.login("YOUR_API_KEY")

Single model call

First, you'll need to deploy a model to Baseten.

You can deploy foundation models like WizardLM and Alpaca with one click from the Baseten model library or, if you have your own model, deploy it with this tutorial.

In this example, we'll work with WizardLM. Deploy WizardLM here and follow along with the deployed model's version ID.

from langchain.llms import Baseten

# Load the model
wizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)

# Prompt the model
Baseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.
1,683
wizardlm("What is the difference between a Wizard and a Sorcerer?")

Chained model calls

We can chain together multiple calls to one or multiple models, which is the whole point of Langchain!

This example uses WizardLM to plan a meal with an entree, three sides, and an alcoholic and non-alcoholic beverage pairing.

from langchain.chains import SimpleSequentialChain
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Build the first link in the chain
prompt = PromptTemplate(
    input_variables=["cuisine"],
    template="Name a complex entree for a {cuisine} dinner. Respond with just the name of a single dish.",
)
link_one = LLMChain(llm=wizardlm, prompt=prompt)

# Build the second link in the chain
prompt = PromptTemplate(
    input_variables=["entree"],
    template="What are three sides that would go with {entree}. Respond with only a list of the sides.",
)
link_two = LLMChain(llm=wizardlm, prompt=prompt)

# Build the third link in the chain
prompt = PromptTemplate(
    input_variables=["sides"],
    template="What is one alcoholic and one non-alcoholic beverage that would go well with this list of sides: {sides}. Respond with only the names of the beverages.",
)
link_three = LLMChain(llm=wizardlm, prompt=prompt)

# Run the full chain!
menu_maker = SimpleSequentialChain(
    chains=[link_one, link_two, link_three], verbose=True
)
menu_maker.run("South Indian")
Baseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.
1,684
Llama.cpp | 🦜️🔗 Langchain
llama-cpp-python is a Python binding for llama.cpp.
1,685
Llama.cpp

llama-cpp-python is a Python binding for llama.cpp. It supports inference for many LLMs, which can be accessed on HuggingFace.

This notebook goes over how to run llama-cpp-python within LangChain.

Note: new versions of llama-cpp-python use GGUF model files (see here). This is a breaking change. To convert existing GGML models to GGUF you can run the following in llama.cpp:

python ./convert-llama-ggmlv3-to-gguf.py --eps 1e-5 --input models/openorca-platypus2-13b.ggmlv3.q4_0.bin --output models/openorca-platypus2-13b.gguf.q4_0.bin

Installation

There are different options on how to install the llama-cpp package:

CPU usage
CPU + GPU (using one of many BLAS backends)
Metal GPU (MacOS with Apple Silicon Chip)

CPU only installation

pip install llama-cpp-python

Installation with OpenBLAS / cuBLAS / CLBlast

llama.cpp supports multiple BLAS backends for faster processing.
llama-cpp-python is a Python binding for llama.cpp.
1,686
Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package for the desired BLAS backend (source).Example installation with cuBLAS backend:CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-pythonIMPORTANT: If you have already installed the CPU-only version of the package, you need to reinstall it from scratch. Consider the following command: CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dirInstallation with Metal​llama.cpp supports Apple silicon as a first-class citizen, optimized via the ARM NEON, Accelerate and Metal frameworks. Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package with Metal support (source).Example installation with Metal Support:CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-pythonIMPORTANT: If you have already installed a CPU-only version of the package, you need to reinstall it from scratch; consider the following command: CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dirInstallation with Windows​Compiling llama-cpp-python from source is the most stable way to install it on Windows. You can follow most of the instructions in the repository itself, but there are some Windows-specific instructions which might be useful.Requirements to install llama-cpp-python:gitpythoncmakeVisual Studio Community (make sure you install this with the following settings)Desktop development with C++Python developmentLinux embedded development with C++Clone the git repository recursively to get the llama.cpp submodule as well: git clone --recursive -j8 https://github.com/abetlen/llama-cpp-python.gitOpen up a command prompt (or Anaconda prompt if you have it installed) and set up the environment variables before installing. If you do not have a GPU, you must set both of the following variables.set FORCE_CMAKE=1set
llama-cpp-python is a Python binding for llama.cpp.
llama-cpp-python is a Python binding for llama.cpp. ->: Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package for the desired BLAS backend (source).Example installation with cuBLAS backend:CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-pythonIMPORTANT: If you have already installed the CPU-only version of the package, you need to reinstall it from scratch. Consider the following command: CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dirInstallation with Metal​llama.cpp supports Apple silicon as a first-class citizen, optimized via the ARM NEON, Accelerate and Metal frameworks. Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package with Metal support (source).Example installation with Metal Support:CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-pythonIMPORTANT: If you have already installed a CPU-only version of the package, you need to reinstall it from scratch; consider the following command: CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dirInstallation with Windows​Compiling llama-cpp-python from source is the most stable way to install it on Windows. You can follow most of the instructions in the repository itself, but there are some Windows-specific instructions which might be useful.Requirements to install llama-cpp-python:gitpythoncmakeVisual Studio Community (make sure you install this with the following settings)Desktop development with C++Python developmentLinux embedded development with C++Clone the git repository recursively to get the llama.cpp submodule as well: git clone --recursive -j8 https://github.com/abetlen/llama-cpp-python.gitOpen up a command prompt (or Anaconda prompt if you have it installed) and set up the environment variables before installing. If you do not have a GPU, you must set both of the following variables.set FORCE_CMAKE=1set
1,687
of the following variables.set FORCE_CMAKE=1set CMAKE_ARGS=-DLLAMA_CUBLAS=OFFYou can ignore the second environment variable if you have an NVIDIA GPU.Compiling and installing​In the same command prompt (or Anaconda prompt) where you set the variables, you can cd into the llama-cpp-python directory and run the following commands.python setup.py cleanpython setup.py installUsage​Make sure you are following all instructions to install all necessary model files.You don't need an API_TOKEN as you will run the LLM locally.It is worth understanding which models are suitable for the desired machine.from langchain.llms import LlamaCppfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerConsider using a template that suits your model! Check the models page on HuggingFace etc. to get a correct prompting template.template = """Question: {question}Answer: Let's work this out in a step by step way to be sure we have the right answer."""prompt = PromptTemplate(template=template, input_variables=["question"])# Callbacks support token-wise streamingcallback_manager = CallbackManager([StreamingStdOutCallbackHandler()])CPU​Example using a LLaMA 2 7B model# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", temperature=0.75, max_tokens=2000, top_p=1, callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager)prompt = """Question: A rap battle between Stephen Colbert and John Oliver"""llm(prompt) Stephen Colbert: Yo, John, I heard you've been talkin' smack about me on your show. Let me tell you somethin', pal, I'm the king of late-night TV My satire is sharp as a razor, it cuts deeper than a knife While you're just a british bloke
llama-cpp-python is a Python binding for llama.cpp.
llama-cpp-python is a Python binding for llama.cpp. ->: of the following variables.set FORCE_CMAKE=1set CMAKE_ARGS=-DLLAMA_CUBLAS=OFFYou can ignore the second environment variable if you have an NVIDIA GPU.Compiling and installing​In the same command prompt (or Anaconda prompt) where you set the variables, you can cd into the llama-cpp-python directory and run the following commands.python setup.py cleanpython setup.py installUsage​Make sure you are following all instructions to install all necessary model files.You don't need an API_TOKEN as you will run the LLM locally.It is worth understanding which models are suitable for the desired machine.from langchain.llms import LlamaCppfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerConsider using a template that suits your model! Check the models page on HuggingFace etc. to get a correct prompting template.template = """Question: {question}Answer: Let's work this out in a step by step way to be sure we have the right answer."""prompt = PromptTemplate(template=template, input_variables=["question"])# Callbacks support token-wise streamingcallback_manager = CallbackManager([StreamingStdOutCallbackHandler()])CPU​Example using a LLaMA 2 7B model# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", temperature=0.75, max_tokens=2000, top_p=1, callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager)prompt = """Question: A rap battle between Stephen Colbert and John Oliver"""llm(prompt) Stephen Colbert: Yo, John, I heard you've been talkin' smack about me on your show. Let me tell you somethin', pal, I'm the king of late-night TV My satire is sharp as a razor, it cuts deeper than a knife While you're just a british bloke
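A quick way to confirm that any of the installation variants above actually succeeded is to import the binding and print the installed version before wiring it into LangChain. This is a minimal sketch, not part of the original page; it only assumes the PyPI distribution name llama-cpp-python used in the commands above.

```python
# Minimal post-install sanity check (assumption: the package was installed
# from PyPI under the distribution name "llama-cpp-python", as shown above).
import importlib.metadata

import llama_cpp  # raises ImportError if the build or install went wrong

print("llama-cpp-python version:", importlib.metadata.version("llama-cpp-python"))
```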
1,688
than a knife While you're just a british bloke tryin' to be funny with your accent and your wit. John Oliver: Oh Stephen, don't be ridiculous, you may have the ratings but I got the real talk. My show is the one that people actually watch and listen to, not just for the laughs but for the facts. While you're busy talkin' trash, I'm out here bringing the truth to light. Stephen Colbert: Truth? Ha! You think your show is about truth? Please, it's all just a joke to you. You're just a fancy-pants british guy tryin' to be funny with your news and your jokes. While I'm the one who's really makin' a difference, with my sat llama_print_timings: load time = 358.60 ms llama_print_timings: sample time = 172.55 ms / 256 runs ( 0.67 ms per token, 1483.59 tokens per second) llama_print_timings: prompt eval time = 613.36 ms / 16 tokens ( 38.33 ms per token, 26.09 tokens per second) llama_print_timings: eval time = 10151.17 ms / 255 runs ( 39.81 ms per token, 25.12 tokens per second) llama_print_timings: total time = 11332.41 ms "\nStephen Colbert:\nYo, John, I heard you've been talkin' smack about me on your show.\nLet me tell you somethin', pal, I'm the king of late-night TV\nMy satire is sharp as a razor, it cuts deeper than a knife\nWhile you're just a british bloke tryin' to be funny with your accent and your wit.\nJohn Oliver:\nOh Stephen, don't be ridiculous, you may have the ratings but I got the real talk.\nMy show is the one that people actually watch and listen to, not just for the laughs but for the facts.\nWhile you're busy talkin' trash, I'm out here bringing the truth to light.\nStephen Colbert:\nTruth? Ha! You think your show is about truth? Please, it's all just a joke to you.\nYou're just a fancy-pants british guy tryin' to be funny with your news and your jokes.\nWhile I'm the one who's really makin' a difference, with my sat"Example using a LLaMA v1 model#
llama-cpp-python is a Python binding for llama.cpp.
llama-cpp-python is a Python binding for llama.cpp. ->: than a knife While you're just a british bloke tryin' to be funny with your accent and your wit. John Oliver: Oh Stephen, don't be ridiculous, you may have the ratings but I got the real talk. My show is the one that people actually watch and listen to, not just for the laughs but for the facts. While you're busy talkin' trash, I'm out here bringing the truth to light. Stephen Colbert: Truth? Ha! You think your show is about truth? Please, it's all just a joke to you. You're just a fancy-pants british guy tryin' to be funny with your news and your jokes. While I'm the one who's really makin' a difference, with my sat llama_print_timings: load time = 358.60 ms llama_print_timings: sample time = 172.55 ms / 256 runs ( 0.67 ms per token, 1483.59 tokens per second) llama_print_timings: prompt eval time = 613.36 ms / 16 tokens ( 38.33 ms per token, 26.09 tokens per second) llama_print_timings: eval time = 10151.17 ms / 255 runs ( 39.81 ms per token, 25.12 tokens per second) llama_print_timings: total time = 11332.41 ms "\nStephen Colbert:\nYo, John, I heard you've been talkin' smack about me on your show.\nLet me tell you somethin', pal, I'm the king of late-night TV\nMy satire is sharp as a razor, it cuts deeper than a knife\nWhile you're just a british bloke tryin' to be funny with your accent and your wit.\nJohn Oliver:\nOh Stephen, don't be ridiculous, you may have the ratings but I got the real talk.\nMy show is the one that people actually watch and listen to, not just for the laughs but for the facts.\nWhile you're busy talkin' trash, I'm out here bringing the truth to light.\nStephen Colbert:\nTruth? Ha! You think your show is about truth? Please, it's all just a joke to you.\nYou're just a fancy-pants british guy tryin' to be funny with your news and your jokes.\nWhile I'm the one who's really makin' a difference, with my sat"Example using a LLaMA v1 model#
1,689
with my sat"Example using a LLaMA v1 model# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="./ggml-model-q4_0.bin", callback_manager=callback_manager, verbose=True)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) 1. First, find out when Justin Bieber was born. 2. We know that Justin Bieber was born on March 1, 1994. 3. Next, we need to look up when the Super Bowl was played in that year. 4. The Super Bowl was played on January 28, 1995. 5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers. llama_print_timings: load time = 434.15 ms llama_print_timings: sample time = 41.81 ms / 121 runs ( 0.35 ms per token) llama_print_timings: prompt eval time = 2523.78 ms / 48 tokens ( 52.58 ms per token) llama_print_timings: eval time = 23971.57 ms / 121 runs ( 198.11 ms per token) llama_print_timings: total time = 28945.95 ms '\n\n1. First, find out when Justin Bieber was born.\n2. We know that Justin Bieber was born on March 1, 1994.\n3. Next, we need to look up when the Super Bowl was played in that year.\n4. The Super Bowl was played on January 28, 1995.\n5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.'GPU​If the installation with a BLAS backend was correct, you will see a BLAS = 1 indicator in model properties.Two of the most important parameters for use with GPU are:n_gpu_layers - determines how many layers of the model are offloaded to your GPU.n_batch - how many tokens are processed in parallel. Setting these parameters correctly will dramatically improve the evaluation speed (see wrapper code for more details).n_gpu_layers =
llama-cpp-python is a Python binding for llama.cpp.
llama-cpp-python is a Python binding for llama.cpp. ->: with my sat"Example using a LLaMA v1 model# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="./ggml-model-q4_0.bin", callback_manager=callback_manager, verbose=True)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) 1. First, find out when Justin Bieber was born. 2. We know that Justin Bieber was born on March 1, 1994. 3. Next, we need to look up when the Super Bowl was played in that year. 4. The Super Bowl was played on January 28, 1995. 5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers. llama_print_timings: load time = 434.15 ms llama_print_timings: sample time = 41.81 ms / 121 runs ( 0.35 ms per token) llama_print_timings: prompt eval time = 2523.78 ms / 48 tokens ( 52.58 ms per token) llama_print_timings: eval time = 23971.57 ms / 121 runs ( 198.11 ms per token) llama_print_timings: total time = 28945.95 ms '\n\n1. First, find out when Justin Bieber was born.\n2. We know that Justin Bieber was born on March 1, 1994.\n3. Next, we need to look up when the Super Bowl was played in that year.\n4. The Super Bowl was played on January 28, 1995.\n5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.'GPU​If the installation with a BLAS backend was correct, you will see a BLAS = 1 indicator in model properties.Two of the most important parameters for use with GPU are:n_gpu_layers - determines how many layers of the model are offloaded to your GPU.n_batch - how many tokens are processed in parallel. Setting these parameters correctly will dramatically improve the evaluation speed (see wrapper code for more details).n_gpu_layers =
1,690
wrapper code for more details).n_gpu_layers = 40 # Change this value based on your model and your GPU VRAM pool.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) 1. Identify Justin Bieber's birth date: Justin Bieber was born on March 1, 1994. 2. Find the Super Bowl winner of that year: The NFL season of 1993 with the Super Bowl being played in January or of 1994. 3. Determine which team won the game: The Dallas Cowboys faced the Buffalo Bills in Super Bowl XXVII on January 31, 1993 (as the year is mis-labelled due to a error). The Dallas Cowboys won this matchup. So, Justin Bieber was born when the Dallas Cowboys were the reigning NFL Super Bowl. llama_print_timings: load time = 427.63 ms llama_print_timings: sample time = 115.85 ms / 164 runs ( 0.71 ms per token, 1415.67 tokens per second) llama_print_timings: prompt eval time = 427.53 ms / 45 tokens ( 9.50 ms per token, 105.26 tokens per second) llama_print_timings: eval time = 4526.53 ms / 163 runs ( 27.77 ms per token, 36.01 tokens per second) llama_print_timings: total time = 5293.77 ms "\n\n1. Identify Justin Bieber's birth date: Justin Bieber was born on March 1, 1994.\n\n2. Find the Super Bowl winner of that year: The NFL season of 1993 with the Super Bowl being played in January or of 1994.\n\n3. Determine which team won the game: The Dallas Cowboys faced the Buffalo Bills in Super Bowl XXVII
llama-cpp-python is a Python binding for llama.cpp.
llama-cpp-python is a Python binding for llama.cpp. ->: wrapper code for more details).n_gpu_layers = 40 # Change this value based on your model and your GPU VRAM pool.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) 1. Identify Justin Bieber's birth date: Justin Bieber was born on March 1, 1994. 2. Find the Super Bowl winner of that year: The NFL season of 1993 with the Super Bowl being played in January or of 1994. 3. Determine which team won the game: The Dallas Cowboys faced the Buffalo Bills in Super Bowl XXVII on January 31, 1993 (as the year is mis-labelled due to a error). The Dallas Cowboys won this matchup. So, Justin Bieber was born when the Dallas Cowboys were the reigning NFL Super Bowl. llama_print_timings: load time = 427.63 ms llama_print_timings: sample time = 115.85 ms / 164 runs ( 0.71 ms per token, 1415.67 tokens per second) llama_print_timings: prompt eval time = 427.53 ms / 45 tokens ( 9.50 ms per token, 105.26 tokens per second) llama_print_timings: eval time = 4526.53 ms / 163 runs ( 27.77 ms per token, 36.01 tokens per second) llama_print_timings: total time = 5293.77 ms "\n\n1. Identify Justin Bieber's birth date: Justin Bieber was born on March 1, 1994.\n\n2. Find the Super Bowl winner of that year: The NFL season of 1993 with the Super Bowl being played in January or of 1994.\n\n3. Determine which team won the game: The Dallas Cowboys faced the Buffalo Bills in Super Bowl XXVII
1,691
faced the Buffalo Bills in Super Bowl XXVII on January 31, 1993 (as the year is mis-labelled due to a error). The Dallas Cowboys won this matchup.\n\nSo, Justin Bieber was born when the Dallas Cowboys were the reigning NFL Super Bowl."Metal​If the installation with Metal was correct, you will see a NEON = 1 indicator in model properties.Two of the most important GPU parameters are:n_gpu_layers - determines how many layers of the model are offloaded to your Metal GPU; in most cases, setting it to 1 is enough for Metal.n_batch - how many tokens are processed in parallel; the default is 8, set it to a bigger number.f16_kv - for some reason, Metal only supports True, otherwise you will get an error such as Asserting on type 0
llama-cpp-python is a Python binding for llama.cpp.
llama-cpp-python is a Python binding for llama.cpp. ->: faced the Buffalo Bills in Super Bowl XXVII on January 31, 1993 (as the year is mis-labelled due to a error). The Dallas Cowboys won this matchup.\n\nSo, Justin Bieber was born when the Dallas Cowboys were the reigning NFL Super Bowl."Metal​If the installation with Metal was correct, you will see a NEON = 1 indicator in model properties.Two of the most important GPU parameters are:n_gpu_layers - determines how many layers of the model are offloaded to your Metal GPU; in most cases, setting it to 1 is enough for Metal.n_batch - how many tokens are processed in parallel; the default is 8, set it to a bigger number.f16_kv - for some reason, Metal only supports True, otherwise you will get an error such as Asserting on type 0
1,692
GGML_ASSERT: .../ggml-metal.m:706: false && "not implemented"Setting these parameters correctly will dramatically improve the evaluation speed (see wrapper code for more details).n_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, f16_kv=True, # MUST set to True, otherwise you will run into problems after a couple of calls callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager)The console log will show the following output to indicate Metal was enabled properly.ggml_metal_init: allocatingggml_metal_init: using MPS...You can also check Activity Monitor to watch the GPU usage of the process; CPU usage will drop dramatically after turning on n_gpu_layers=1. For the first call to the LLM, performance may be slow due to model compilation on the Metal GPU.Grammars​We can use grammars to constrain model outputs and sample tokens based on the rules defined in them.To demonstrate this concept, we've included sample grammar files that will be used in the examples below.Creating gbnf grammar files can be time-consuming, but if you have a use case where output schemas are important, there are two tools that can help:Online grammar generator app that converts TypeScript interface definitions to a gbnf file.Python script for converting JSON schema to a gbnf file. You can, for example, create a pydantic object, generate its JSON schema using the .schema_json() method, and then use this script to convert it to a gbnf file.In the first example, supply the path to the specified json.gbnf file in order to produce JSON:n_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount
llama-cpp-python is a Python binding for llama.cpp.
llama-cpp-python is a Python binding for llama.cpp. ->: GGML_ASSERT: .../ggml-metal.m:706: false && "not implemented"Setting these parameters correctly will dramatically improve the evaluation speed (see wrapper code for more details).n_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, f16_kv=True, # MUST set to True, otherwise you will run into problems after a couple of calls callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager)The console log will show the following output to indicate Metal was enabled properly.ggml_metal_init: allocatingggml_metal_init: using MPS...You can also check Activity Monitor to watch the GPU usage of the process; CPU usage will drop dramatically after turning on n_gpu_layers=1. For the first call to the LLM, performance may be slow due to model compilation on the Metal GPU.Grammars​We can use grammars to constrain model outputs and sample tokens based on the rules defined in them.To demonstrate this concept, we've included sample grammar files that will be used in the examples below.Creating gbnf grammar files can be time-consuming, but if you have a use case where output schemas are important, there are two tools that can help:Online grammar generator app that converts TypeScript interface definitions to a gbnf file.Python script for converting JSON schema to a gbnf file. You can, for example, create a pydantic object, generate its JSON schema using the .schema_json() method, and then use this script to convert it to a gbnf file.In the first example, supply the path to the specified json.gbnf file in order to produce JSON:n_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount
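To make the pydantic route mentioned above concrete, here is a hedged sketch of the schema-generation step only; the Person model and its fields are invented for illustration, and the JSON-schema-to-gbnf conversion itself is still done with the separate script linked in the docs.

```python
# Illustrative only: generate the JSON schema that the json-schema-to-gbnf
# converter script would consume. The `Person` model is a made-up example.
from typing import List

from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int
    interests: List[str]


schema = Person.schema_json(indent=2)  # pydantic v1-style API, as referenced above
with open("person_schema.json", "w") as f:
    f.write(schema)  # feed this file to the conversion script to obtain a .gbnf grammar
```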
1,693
be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager grammar_path="/Users/rlm/Desktop/Code/langchain-main/langchain/libs/langchain/langchain/llms/grammars/json.gbnf",)result=llm("Describe a person in JSON format:") { "name": "John Doe", "age": 34, "": { "title": "Software Developer", "company": "Google" }, "interests": [ "Sports", "Music", "Cooking" ], "address": { "street_number": 123, "street_name": "Oak Street", "city": "Mountain View", "state": "California", "postal_code": 94040 }} llama_print_timings: load time = 357.51 ms llama_print_timings: sample time = 1213.30 ms / 144 runs ( 8.43 ms per token, 118.68 tokens per second) llama_print_timings: prompt eval time = 356.78 ms / 9 tokens ( 39.64 ms per token, 25.23 tokens per second) llama_print_timings: eval time = 3947.16 ms / 143 runs ( 27.60 ms per token, 36.23 tokens per second) llama_print_timings: total time = 5846.21 msWe can also supply list.gbnf to return a list:n_gpu_layers = 1 n_batch = 512llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True,
llama-cpp-python is a Python binding for llama.cpp.
llama-cpp-python is a Python binding for llama.cpp. ->: be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager grammar_path="/Users/rlm/Desktop/Code/langchain-main/langchain/libs/langchain/langchain/llms/grammars/json.gbnf",)result=llm("Describe a person in JSON format:") { "name": "John Doe", "age": 34, "": { "title": "Software Developer", "company": "Google" }, "interests": [ "Sports", "Music", "Cooking" ], "address": { "street_number": 123, "street_name": "Oak Street", "city": "Mountain View", "state": "California", "postal_code": 94040 }} llama_print_timings: load time = 357.51 ms llama_print_timings: sample time = 1213.30 ms / 144 runs ( 8.43 ms per token, 118.68 tokens per second) llama_print_timings: prompt eval time = 356.78 ms / 9 tokens ( 39.64 ms per token, 25.23 tokens per second) llama_print_timings: eval time = 3947.16 ms / 143 runs ( 27.60 ms per token, 36.23 tokens per second) llama_print_timings: total time = 5846.21 msWe can also supply list.gbnf to return a list:n_gpu_layers = 1 n_batch = 512llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True,
1,694
verbose=True, grammar_path="/Users/rlm/Desktop/Code/langchain-main/langchain/libs/langchain/langchain/llms/grammars/list.gbnf",)result=llm("List of top-3 my favourite books:") ["The Catcher in the Rye", "Wuthering Heights", "Anna Karenina"] llama_print_timings: load time = 322.34 ms llama_print_timings: sample time = 232.60 ms / 26 runs ( 8.95 ms per token, 111.78 tokens per second) llama_print_timings: prompt eval time = 321.90 ms / 11 tokens ( 29.26 ms per token, 34.17 tokens per second) llama_print_timings: eval time = 680.82 ms / 25 runs ( 27.23 ms per token, 36.72 tokens per second) llama_print_timings: total time = 1295.27 msPreviousKoboldAI APINextLLM Caching integrationsInstallationCPU only installationInstallation with OpenBLAS / cuBLAS / CLBlastInstallation with MetalInstallation with WindowsUsageCPUGPUMetalGrammarsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
llama-cpp-python is a Python binding for llama.cpp.
llama-cpp-python is a Python binding for llama.cpp. ->: verbose=True, grammar_path="/Users/rlm/Desktop/Code/langchain-main/langchain/libs/langchain/langchain/llms/grammars/list.gbnf",)result=llm("List of top-3 my favourite books:") ["The Catcher in the Rye", "Wuthering Heights", "Anna Karenina"] llama_print_timings: load time = 322.34 ms llama_print_timings: sample time = 232.60 ms / 26 runs ( 8.95 ms per token, 111.78 tokens per second) llama_print_timings: prompt eval time = 321.90 ms / 11 tokens ( 29.26 ms per token, 34.17 tokens per second) llama_print_timings: eval time = 680.82 ms / 25 runs ( 27.23 ms per token, 36.72 tokens per second) llama_print_timings: total time = 1295.27 msPreviousKoboldAI APINextLLM Caching integrationsInstallationCPU only installationInstallation with OpenBLAS / cuBLAS / CLBlastInstallation with MetalInstallation with WindowsUsageCPUGPUMetalGrammarsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
1,695
LLM | 🦜️🔗 Langchain
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents. ->: LLM | 🦜️🔗 Langchain
1,696
Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalLLMRouterSequentialTransformationDocumentsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsFoundationalLLMOn this pageLLMAn LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.An LLMChain consists of a PromptTemplate and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM and returns the LLM output.Get started​from langchain.prompts import PromptTemplatefrom langchain.llms import OpenAIfrom langchain.chains import LLMChainprompt_template = "What is a good name for a company that makes {product}?"llm = OpenAI(temperature=0)llm_chain = LLMChain( llm=llm, prompt=PromptTemplate.from_template(prompt_template))llm_chain("colorful socks") {'product': 'colorful socks', 'text': '\n\nSocktastic!'}Additional ways of running LLMChain​Aside from the __call__ and run methods shared by all Chain objects, LLMChain offers a few more ways of calling the chain logic:apply allows you to run the chain against a list of inputs:input_list = [ {"product": "socks"}, {"product": "computer"}, {"product": "shoes"}]llm_chain.apply(input_list) [{'text': '\n\nSocktastic!'}, {'text': '\n\nTechCore Solutions.'}, {'text': '\n\nFootwear Factory.'}]generate is similar to apply, except that it returns an LLMResult instead of a string. LLMResult often contains useful generation info such as token usage and finish reason.llm_chain.generate(input_list) LLMResult(generations=[[Generation(text='\n\nSocktastic!',
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents. ->: Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKGet startedIntroductionInstallationQuickstartLangChain Expression LanguageInterfaceHow toCookbookLangChain Expression Language (LCEL)Why use LCEL?ModulesModel I/​ORetrievalChainsHow toFoundationalLLMRouterSequentialTransformationDocumentsMemoryAgentsCallbacksModulesSecurityGuidesMoreModulesChainsFoundationalLLMOn this pageLLMAn LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.An LLMChain consists of a PromptTemplate and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM and returns the LLM output.Get started​from langchain.prompts import PromptTemplatefrom langchain.llms import OpenAIfrom langchain.chains import LLMChainprompt_template = "What is a good name for a company that makes {product}?"llm = OpenAI(temperature=0)llm_chain = LLMChain( llm=llm, prompt=PromptTemplate.from_template(prompt_template))llm_chain("colorful socks") {'product': 'colorful socks', 'text': '\n\nSocktastic!'}Additional ways of running LLMChain​Aside from the __call__ and run methods shared by all Chain objects, LLMChain offers a few more ways of calling the chain logic:apply allows you to run the chain against a list of inputs:input_list = [ {"product": "socks"}, {"product": "computer"}, {"product": "shoes"}]llm_chain.apply(input_list) [{'text': '\n\nSocktastic!'}, {'text': '\n\nTechCore Solutions.'}, {'text': '\n\nFootwear Factory.'}]generate is similar to apply, except that it returns an LLMResult instead of a string. LLMResult often contains useful generation info such as token usage and finish reason.llm_chain.generate(input_list) LLMResult(generations=[[Generation(text='\n\nSocktastic!',
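As a small illustrative follow-up (not from the original page), the LLMResult returned by generate can be unpacked as below, matching the fields visible in the printed result above; the exact keys inside llm_output depend on the provider.

```python
# Unpack an LLMResult: one inner list of Generation objects per input.
result = llm_chain.generate(input_list)

for generations in result.generations:
    gen = generations[0]
    print(gen.text.strip(), gen.generation_info)  # e.g. {'finish_reason': 'stop', ...}

# Provider-level metadata such as token usage (may be None for some providers).
if result.llm_output:
    print(result.llm_output.get("token_usage"))
```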
1,697
generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})predict is similar to the run method except that the input keys are specified as keyword arguments instead of a Python dict.# Single input examplellm_chain.predict(product="colorful socks") '\n\nSocktastic!'# Multiple inputs exampletemplate = """Tell me a {adjective} joke about {subject}."""prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))llm_chain.predict(adjective="sad", subject="ducks") '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'Parsing the outputs​By default, LLMChain does not parse the output even if the underlying prompt object has an output parser. If you would like to apply that output parser to the LLM output, use predict_and_parse instead of predict and apply_and_parse instead of apply.With predict:from langchain.output_parsers import CommaSeparatedListOutputParseroutput_parser = CommaSeparatedListOutputParser()template = """List all the colors in a rainbow"""prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)llm_chain = LLMChain(prompt=prompt, llm=llm)llm_chain.predict() '\n\nRed, orange, yellow, green, blue, indigo, violet'With predict_and_parse:llm_chain.predict_and_parse() ['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']Initialize from string​You can also construct an LLMChain from a string template directly.template = """Tell me a {adjective} joke about {subject}."""llm_chain = LLMChain.from_string(llm=llm,
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents. ->: generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})predict is similar to the run method except that the input keys are specified as keyword arguments instead of a Python dict.# Single input examplellm_chain.predict(product="colorful socks") '\n\nSocktastic!'# Multiple inputs exampletemplate = """Tell me a {adjective} joke about {subject}."""prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))llm_chain.predict(adjective="sad", subject="ducks") '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'Parsing the outputs​By default, LLMChain does not parse the output even if the underlying prompt object has an output parser. If you would like to apply that output parser to the LLM output, use predict_and_parse instead of predict and apply_and_parse instead of apply.With predict:from langchain.output_parsers import CommaSeparatedListOutputParseroutput_parser = CommaSeparatedListOutputParser()template = """List all the colors in a rainbow"""prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)llm_chain = LLMChain(prompt=prompt, llm=llm)llm_chain.predict() '\n\nRed, orange, yellow, green, blue, indigo, violet'With predict_and_parse:llm_chain.predict_and_parse() ['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']Initialize from string​You can also construct an LLMChain from a string template directly.template = """Tell me a {adjective} joke about {subject}."""llm_chain = LLMChain.from_string(llm=llm,
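For clarity (a hedged aside, not from the page itself): predict_and_parse simply applies the prompt's output parser to the raw completion string, so the same list can be obtained by parsing the predict output manually.

```python
# Manually applying the prompt's output parser to the raw completion.
raw = llm_chain.predict()        # e.g. '\n\nRed, orange, yellow, ...'
print(output_parser.parse(raw))  # -> ['Red', 'orange', 'yellow', ...]
```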
1,698
= LLMChain.from_string(llm=llm, template=template)llm_chain.predict(adjective="sad", subject="ducks") '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'PreviousFoundationalNextRouterGet startedAdditional ways of running LLMChainParsing the outputsInitialize from stringCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents. ->: = LLMChain.from_string(llm=llm, template=template)llm_chain.predict(adjective="sad", subject="ducks") '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'PreviousFoundationalNextRouterGet startedAdditional ways of running LLMChainParsing the outputsInitialize from stringCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
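For comparison, and as a hedged sketch rather than anything stated on the page, the from_string constructor above behaves like building the PromptTemplate yourself with from_template, which the earlier Get started example already uses.

```python
# Equivalent construction without the from_string shorthand.
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Tell me a {adjective} joke about {subject}."""
llm_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(template))
llm_chain.predict(adjective="sad", subject="ducks")
```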
1,699
Router | 🦜️🔗 Langchain
This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the next chain to use for a given input.
This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the next chain to use for a given input. ->: Router | 🦜️🔗 Langchain