Chat Models#
Note: Conceptual Guide
Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
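For example, instead of a “text in, text out” call, a chat model takes a list of messages and returns a message. A minimal sketch, assuming an OpenAI API key is configured (ChatOpenAI and HumanMessage are the same classes used in the streaming guide later in this documentation):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
# Build a chat model and send it a single human message; the reply is a message object.
chat = ChatOpenAI(temperature=0)
response = chat([HumanMessage(content="Translate this sentence to French: I love programming.")])
print(response.content)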
The following sections of documentation are provided:
Getting Started: An overview of all the functionality the LangChain chat model class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with chat models (streaming, async, etc).
Integrations: A collection of examples on how to integrate different chat model providers with LangChain (OpenAI, etc).
Generic Functionality#
The examples here address common “how-to” questions for working with LLMs.
How to use the async API for LLMs
How to write a custom LLM wrapper
How (and why) to use the fake LLM
How to cache LLM calls
How to serialize LLM classes
How to stream LLM and Chat Model responses
How to track token usage
Getting Started#
This notebook goes over how to use the LLM class in LangChain.
The LLM class is designed for interfacing with LLMs. There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc.), and this class provides a standard interface for all of them. In this part of the documentation, we will focus on generic LLM functionality. For details on working with a specific LLM wrapper, please see the examples in the How-To section.
For this notebook, we will work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.
from langchain.llms import OpenAI
llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)
Generate Text: The most basic functionality an LLM has is just the ability to call it, passing in a string and getting back a string.
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Generate: More broadly, you can call it with a list of inputs, getting back a more complete response than just the text. This complete response includes things like multiple top responses, as well as LLM provider-specific information.
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)
len(llm_result.generations)
30
llm_result.generations[0]
[Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
 Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]
llm_result.generations[-1]
[Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]
You can also access provider specific information that is returned. This information is NOT standardized across providers.
llm_result.llm_output
{'token_usage': {'completion_tokens': 3903,
'total_tokens': 4023,
'prompt_tokens': 120}}
Number of Tokens: You can also estimate how many tokens a piece of text will be in that model. This is useful because models have a context length (and cost more for more tokens), which means you need to be aware of how long the text you are passing in is.
Notice that by default the tokens are estimated using tiktoken (except for Python versions earlier than 3.8, where a Hugging Face tokenizer is used).
llm.get_num_tokens("what a joke")
3
Integrations#
The examples here are all “how-to” guides for integrating with various LLM providers.
AI21
Aleph Alpha
Azure OpenAI
Banana
CerebriumAI
Cohere
DeepInfra
ForefrontAI
GooseAI
GPT4All
Hugging Face Hub
Hugging Face Local Pipelines
Llama-cpp
Manifest
Modal
NLP Cloud
OpenAI
Petals
PredictionGuard
PromptLayer OpenAI
Replicate
Runhouse
SageMakerEndpoint
StochasticAI
Writer
Contents
In Memory Cache
SQLite Cache
Redis Cache
GPTCache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
How to cache LLM calls#
This notebook covers how to cache results of individual LLM calls.
from langchain.llms import OpenAI
In Memory Cache#
import langchain
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()
# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 14.2 ms, sys: 4.9 ms, total: 19.1 ms
Wall time: 1.1 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 162 µs, sys: 7 µs, total: 169 µs
Wall time: 175 µs
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
SQLite Cache#
!rm .langchain.db
# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Redis Cache#
# We can do the same thing with a Redis cache
# (make sure your local Redis instance is running first before running this example)
from redis import Redis
from langchain.cache import RedisCache
langchain.llm_cache = RedisCache(redis_=Redis())
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
GPTCache#
We can use GPTCache for exact match caching OR to cache results based on semantic similarity
Let’s first start with an example of exact match
import gptcache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import get_data_manager
from langchain.cache import GPTCache
# Avoid having multiple caches use the same file, which would cause different LLM caches to interfere with each other
i = 0
file_prefix = "data_map"
def init_gptcache_map(cache_obj: gptcache.Cache):
    global i
    cache_path = f'{file_prefix}_{i}.txt'
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=get_data_manager(data_path=cache_path),
    )
    i += 1
langchain.llm_cache = GPTCache(init_gptcache_map)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 8.6 ms, sys: 3.82 ms, total: 12.4 ms
Wall time: 881 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 286 µs, sys: 21 µs, total: 307 µs
Wall time: 316 µs
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Let’s now show an example of similarity caching
import gptcache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import get_data_manager
from langchain.cache import GPTCache
from gptcache.manager import get_data_manager, CacheBase, VectorBase
from gptcache import Cache
from gptcache.embedding import Onnx
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation
# Avoid having multiple caches use the same file, which would cause different LLM caches to interfere with each other
i = 0
file_prefix = "data_map"
llm_cache = Cache()
def init_gptcache_map(cache_obj: gptcache.Cache):
    global i
    cache_path = f'{file_prefix}_{i}.txt'
    onnx = Onnx()
    cache_base = CacheBase('sqlite')
    vector_base = VectorBase('faiss', dimension=onnx.dimension)
    data_manager = get_data_manager(cache_base, vector_base, max_size=10, clean_size=2)
    cache_obj.init(
        pre_embedding_func=get_prompt,
        embedding_func=onnx.to_embeddings,
        data_manager=data_manager,
        similarity_evaluation=SearchDistanceEvaluation(),
    )
    i += 1
langchain.llm_cache = GPTCache(init_gptcache_map)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 1.01 s, sys: 153 ms, total: 1.16 s
Wall time: 2.49 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is an exact match, so it finds it in the cache
llm("Tell me a joke")
CPU times: user 745 ms, sys: 13.2 ms, total: 758 ms
Wall time: 136 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is not an exact match, but semantically within distance so it hits!
llm("Tell me joke")
CPU times: user 737 ms, sys: 7.79 ms, total: 745 ms
Wall time: 135 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
SQLAlchemy Cache#
# You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.
# from langchain.cache import SQLAlchemyCache
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
# langchain.llm_cache = SQLAlchemyCache(engine)
Custom SQLAlchemy Schemas#
# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:
from sqlalchemy import Column, Integer, String, Computed, Index, Sequence
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import TSVectorType
from langchain.cache import SQLAlchemyCache
Base = declarative_base()
class FulltextLLMCache(Base):  # type: ignore
    """Postgres table for fulltext-indexed LLM Cache"""
    __tablename__ = "llm_cache_fulltext"
    id = Column(Integer, Sequence('cache_id'), primary_key=True)
    prompt = Column(String, nullable=False)
    llm = Column(String, nullable=False)
    idx = Column(Integer)
    response = Column(String)
    prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
    __table_args__ = (
        Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
    )
engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)
Optional Caching#
You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)
%%time
llm("Tell me a joke")
CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms
Wall time: 745 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
llm("Tell me a joke")
CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms
Wall time: 623 ms
'\n\nTwo guys stole a calendar. They got six months each.'
Optional Caching in Chains#
You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it’s often easier to construct the chain first, and then edit the LLM afterwards.
As an example, we will load a summarizer map-reduce chain. We will cache results for the map step, but not for the combine (reduce) step.
llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
text_splitter = CharacterTextSplitter()
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)
%%time
chain.run(docs)
CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms
Wall time: 5.09 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'
When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.
%%time
chain.run(docs)
CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'
Contents
Loading
Saving
How to serialize LLM classes#
This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc).
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm
Loading#
First, let's go over loading an LLM from disk. LLMs can be saved on disk in two formats: json or yaml. No matter the extension, they are loaded in the same way.
!cat llm.json
{
"model_name": "text-davinci-003",
"temperature": 0.7,
"max_tokens": 256,
"top_p": 1.0,
"frequency_penalty": 0.0,
"presence_penalty": 0.0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"_type": "openai"
}
llm = load_llm("llm.json")
!cat llm.yaml
_type: openai
best_of: 1
frequency_penalty: 0.0
max_tokens: 256
model_name: text-davinci-003
n: 1
presence_penalty: 0.0
request_timeout: null
temperature: 0.7
top_p: 1.0
llm = load_llm("llm.yaml")
Saving#
If you want to go from an LLM in memory to a serialized version of it, you can do so easily by calling the .save method. Again, this supports both json and yaml.
llm.save("llm.json")
llm.save("llm.yaml")
How to track token usage#
This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.
Let’s first look at an extremely simple example of tracking token usage for a single LLM call.
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    print(cb)
Tokens Used: 42
Prompt Tokens: 4
Completion Tokens: 38
Successful Requests: 1
Total Cost (USD): $0.00084
Anything inside the context manager will get tracked. Here’s an example of using it to track multiple calls in sequence.
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    result2 = llm("Tell me a joke")
    print(cb.total_tokens)
91
If a chain or agent with multiple steps in it is used, it will track all those steps.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
with get_openai_callback() as cb:
    response = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Prompt Tokens: {cb.prompt_tokens}")
    print(f"Completion Tokens: {cb.completion_tokens}")
    print(f"Total Cost (USD): ${cb.total_cost}")
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557.
> Finished chain.
Total Tokens: 1506
Prompt Tokens: 1350
Completion Tokens: 156
Total Cost (USD): $0.03012
How (and why) to use the fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this.
We start by using the FakeLLM in an agent.
from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
tools = load_tools(["python_repl"])
responses=[
    "Action: Python REPL\nAction Input: print(2 + 2)",
    "Final Answer: 4"
]
llm = FakeListLLM(responses=responses)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2 + 2")
> Entering new AgentExecutor chain...
Action: Python REPL
Action Input: print(2 + 2)
Observation: 4
Thought:Final Answer: 4
> Finished chain.
'4'
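The fake LLM can also be dropped into an ordinary chain to make tests deterministic. A minimal sketch, assuming a single canned response (the prompt and response text here are illustrative):
from langchain import PromptTemplate, LLMChain
from langchain.llms.fake import FakeListLLM
# Each call to the fake LLM returns the next canned response, so the chain's output is predictable.
fake_llm = FakeListLLM(responses=["Paris"])
prompt = PromptTemplate(template="What is the capital of {country}?", input_variables=["country"])
chain = LLMChain(prompt=prompt, llm=fake_llm)
print(chain.run("France"))  # -> "Paris"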
How to write a custom LLM wrapper#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
There is only one required thing that a custom LLM needs to implement:
A _call method that takes in a string, some optional stop words, and returns a string
There is a second optional thing it can implement:
An _identifying_params property that is used to help with printing of this class. Should return a dictionary.
Let’s implement a very simple custom LLM that just returns the first N characters of the input.
from langchain.llms.base import LLM
from typing import Optional, List, Mapping, Any
class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[:self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}
We can now use this like any other LLM.
llm = CustomLLM(n=10)
llm("This is a foobar thing")
'This is a '
We can also print the LLM and see its custom print.
print(llm)
CustomLLM
Params: {'n': 10}
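Because it implements the standard LLM interface, the custom LLM can also be plugged into higher-level components such as an LLMChain. A minimal sketch (the prompt text and question are illustrative):
from langchain import PromptTemplate, LLMChain
# The chain formats the prompt and passes it to CustomLLM, which echoes back the first n characters.
prompt = PromptTemplate(template="Question: {question}", input_variables=["question"])
chain = LLMChain(prompt=prompt, llm=CustomLLM(n=10))
print(chain.run("What is LangChain?"))  # -> 'Question: '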
How to use the async API for LLMs#
LangChain provides async support for LLMs by leveraging the asyncio library.
Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, OpenAI, PromptLayerOpenAI, ChatOpenAI and Anthropic are supported, but async support for other LLMs is on the roadmap.
You can use the agenerate method to call an OpenAI LLM asynchronously.
import time
import asyncio
from langchain.llms import OpenAI
def generate_serially():
    llm = OpenAI(temperature=0.9)
    for _ in range(10):
        resp = llm.generate(["Hello, how are you?"])
        print(resp.generations[0][0].text)

async def async_generate(llm):
    resp = await llm.agenerate(["Hello, how are you?"])
    print(resp.generations[0][0].text)

async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    tasks = [async_generate(llm) for _ in range(10)]
    await asyncio.gather(*tasks)
s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
Concurrent executed in 1.39 seconds.
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
I'm doing well, thanks! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
Serial executed in 5.77 seconds.
How to stream LLM and Chat Model responses#
LangChain provides streaming support for LLMs. Currently, we support streaming for the OpenAI, ChatOpenAI, and Anthropic implementations, but streaming support for other LLM implementations is on the roadmap. To utilize streaming, use a CallbackHandler that implements on_llm_new_token. In this example, we are using StreamingStdOutCallbackHandler.
from langchain.llms import OpenAI, Anthropic
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage
llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = llm("Write me a song about sparkling water.")
Verse 1
I'm sippin' on sparkling water,
It's so refreshing and light,
It's the perfect way to quench my thirst
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 2
I'm sippin' on sparkling water,
It's so bubbly and bright,
It's the perfect way to cool me down
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 3
I'm sippin' on sparkling water,
It's so light and so clear,
It's the perfect way to keep me cool
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
We still have access to the end LLMResult if using generate. However, token_usage is not currently supported for streaming.
llm.generate(["Tell me a joke."])
Q: What did the fish say when it hit the wall?
A: Dam!
LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': None, 'logprobs': None})]], llm_output={'token_usage': {}, 'model_name': 'text-davinci-003'})
Here’s an example with the ChatOpenAI chat model implementation:
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's oh so pure
Sparkling water, I can't ignore
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Verse 2:
No sugar, no calories, just H2O
A drink that's good for me, don't you know
With lemon or lime, you're even better
Sparkling water, you're my forever
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Bridge:
You're my go-to drink, day or night
You make me feel so light
I'll never give you up, you're my true love
Sparkling water, you're sent from above
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Outro:
Sparkling water, you're the one for me
I'll never let you go, can't you see
You're my drink of choice, forevermore
Sparkling water, I adore.
Here is an example with the Anthropic LLM implementation, which uses their Claude model.
llm = Anthropic(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
llm("Write me a song about sparkling water.") | /content/https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html |
27b0f42ed7d2-3 | llm("Write me a song about sparkling water.")
Sparkling water, bubbles so bright,
Fizzing and popping in the light.
No sugar or calories, a healthy delight,
Sparkling water, refreshing and light.
Carbonation that tickles the tongue,
In flavors of lemon and lime unsung.
Sparkling water, a drink quite all right,
Bubbles sparkling in the light.
'\nSparkling water, bubbles so bright,\n\nFizzing and popping in the light.\n\nNo sugar or calories, a healthy delight,\n\nSparkling water, refreshing and light.\n\nCarbonation that tickles the tongue,\n\nIn flavors of lemon and lime unsung.\n\nSparkling water, a drink quite all right,\n\nBubbles sparkling in the light.'
Llama-cpp#
llama-cpp is a Python binding for llama.cpp.
It supports several LLMs.
This notebook goes over how to run llama-cpp within LangChain.
!pip install llama-cpp-python
Make sure you are following all instructions to install all necessary model files.
You don’t need an API_TOKEN!
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="./ggml-model-q4_0.bin", callback_manager=callback_manager, verbose=True
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
First we need to identify what year Justin Beiber was born in. A quick google search reveals that he was born on March 1st, 1994. Now we know when the Super Bowl was played in, so we can look up which NFL team won it. The NFL Superbowl of the year 1994 was won by the San Francisco 49ers against the San Diego Chargers.
' First we need to identify what year Justin Beiber was born in. A quick google search reveals that he was born on March 1st, 1994. Now we know when the Super Bowl was played in, so we can look up which NFL team won it. The NFL Superbowl of the year 1994 was won by the San Francisco 49ers against the San Diego Chargers.'
Aleph Alpha#
The Luminous series is a family of large language models.
This example goes over how to use LangChain to interact with Aleph Alpha models.
# Install the package
!pip install aleph-alpha-client
# create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-token
from getpass import getpass
ALEPH_ALPHA_API_KEY = getpass()
from langchain.llms import AlephAlpha
from langchain import PromptTemplate, LLMChain
template = """Q: {question}
A:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AlephAlpha(model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"], aleph_alpha_api_key=ALEPH_ALPHA_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is AI?"
llm_chain.run(question)
' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\n'
Contents
API configuration
Deployments
Azure OpenAI#
This notebook goes over how to use LangChain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.
API configuration#
You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:
# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2022-12-01` for the released version.
export OPENAI_API_VERSION=2022-12-01
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_KEY=<your Azure OpenAI API key>
Alternatively, you can configure the API right within your running Python environment:
import os
os.environ["OPENAI_API_TYPE"] = "azure"
...
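For completeness, a sketch of the same configuration done from Python, mirroring the bash variables above (the resource name and key are placeholders):
import os
# These values correspond to the environment variables shown in the bash snippet above.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "https://your-resource-name.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "<your Azure OpenAI API key>"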
Deployments#
With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.
Let’s say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:
import openai
response = openai.Completion.create(
    engine="text-davinci-002-prod",
    prompt="This is a test",
    max_tokens=5
)
!pip install openai
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(deployment_name="text-davinci-002-prod", model_name="text-davinci-002")
# Run the LLM
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
We can also print the LLM and see its custom print.
print(llm)
AzureOpenAI
Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
Contents
Basic LLM usage
Chaining
PredictionGuard#
This notebook goes over how to use the PredictionGuard wrapper.
! pip install predictionguard langchain
import predictionguard as pg
from langchain.llms import PredictionGuard
Basic LLM usage#
pgllm = PredictionGuard(name="default-text-gen", token="<your access token>")
pgllm("Tell me a joke")
Chaining#
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.predict(question=question)
template = """Write a {adjective} poem about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
llm_chain.predict(adjective="sad", subject="ducks")
Contents
Examples
StableLM, by Stability AI
Dolly, by DataBricks
Camel, by Writer
Hugging Face Hub#
The Hugging Face Hub is an online platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, where people can easily collaborate and build ML together.
This example showcases how to connect to the Hugging Face Hub.
To use, you should have the huggingface_hub python package installed.
!pip install huggingface_hub > /dev/null
# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token
from getpass import getpass
HUGGINGFACEHUB_API_TOKEN = getpass()
import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN
Select a Model
from langchain import HuggingFaceHub
repo_id = "google/flan-t5-xl" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "Who won the FIFA World Cup in the year 1994? "
print(llm_chain.run(question))
Examples#
Below are some examples of models you can access through the Hugging Face Hub integration.
StableLM, by Stability AI#
See Stability AI’s organization page for a list of available models.
repo_id = "stabilityai/stablelm-tuned-alpha-3b"
# Others include stabilityai/stablelm-base-alpha-3b
# as well as 7B parameter versions
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Dolly, by DataBricks#
See DataBricks organization page for a list of available models.
from langchain import HuggingFaceHub
repo_id = "databricks/dolly-v2-3b"
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Camel, by Writer#
See Writer’s organization page for a list of available models.
from langchain import HuggingFaceHub
repo_id = "Writer/camel-5b-hf" # See https://huggingface.co/Writer for other options
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})
# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
And many more!
Writer#
Writer is a platform for generating different kinds of language content.
This example goes over how to use LangChain to interact with Writer models.
You have to get the WRITER_API_KEY here.
from getpass import getpass
WRITER_API_KEY = getpass()
import os
os.environ["WRITER_API_KEY"] = WRITER_API_KEY
from langchain.llms import Writer
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
# If you get an error, you probably need to set the "base_url" parameter, which can be taken from the error log.
llm = Writer()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Contents
Compare HF Models
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models like in this example, see https://github.com/HazyResearch/manifest
Another example of using Manifest with LangChain.
!pip install manifest-ml
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper
manifest = Manifest(
    client_name="huggingface",
    client_connection="http://127.0.0.1:5000"
)
print(manifest.client.get_model_params())
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})
# Map reduce example
from langchain import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
_prompt = """Write a concise summary of the following:
{text}
CONCISE SUMMARY:"""
prompt = PromptTemplate(template=_prompt, input_variables=["text"])
text_splitter = CharacterTextSplitter()
mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'
Compare HF Models#
from langchain.model_laboratory import ModelLaboratory
manifest1 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5000"
    ),
    llm_kwargs={"temperature": 0.01}
)
manifest2 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5001"
    ),
    llm_kwargs={"temperature": 0.01}
)
manifest3 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5002"
    ),
    llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
ManifestWrapper
Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01}
pink
ManifestWrapper
Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01}
A flamingo is a small, round
ManifestWrapper
Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01}
pink
Contents
Install cerebrium
Imports
Set the Environment API Key
Create the CerebriumAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
CerebriumAI#
Cerebrium is an AWS SageMaker alternative. It also provides API access to several LLMs.
This notebook goes over how to use LangChain with CerebriumAI.
Install cerebrium#
The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.
# Install the package
!pip3 install cerebrium
Imports#
import os
from langchain.llms import CerebriumAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from CerebriumAI. See here. You are given a 1 hour free of serverless GPU compute to test different models.
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"
Create the CerebriumAI instance#
You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.
llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Contents
Load the model
Integrate the model in an LLMChain
Hugging Face Local Pipelines#
Hugging Face models can be run locally through the HuggingFacePipeline class.
The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the HuggingFaceHub notebook.
To use, you should have the transformers python package installed.
!pip install transformers > /dev/null
Load the model#
from langchain import HuggingFacePipeline
llm = HuggingFacePipeline.from_model_id(model_id="bigscience/bloom-1b7", task="text-generation", model_kwargs={"temperature":0, "max_length":64})
WARNING:root:Failed to default session, using empty session: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1117f9790>: Failed to establish a new connection: [Errno 61] Connection refused'))
Integrate the model in an LLMChain#
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is electroencephalography?"
print(llm_chain.run(question))
/Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x144d06910>: Failed to establish a new connection: [Errno 61] Connection refused'))
First, we need to understand what is an electroencephalogram. An electroencephalogram is a recording of brain activity. It is a recording of brain activity that is made by placing electrodes on the scalp. The electrodes are placed
AI21#
AI21 Studio provides API access to Jurassic-2 large language models.
This example goes over how to use LangChain to interact with AI21 models.
# install the package:
!pip install ai21
# get AI21_API_KEY. Use https://studio.ai21.com/account/account
from getpass import getpass
AI21_API_KEY = getpass()
from langchain.llms import AI21
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AI21(ai21_api_key=AI21_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
'\n1. What year was Justin Bieber born?\nJustin Bieber was born in 1994.\n2. What team won the Super Bowl in 1994?\nThe Dallas Cowboys won the Super Bowl in 1994.'
Contents
Install openai
Imports
Set the Environment API Key
Create the GooseAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
GooseAI#
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.
This notebook goes over how to use LangChain with GooseAI.
Install openai#
The openai package is required to use the GooseAI API. Install openai using pip3 install openai.
$ pip3 install openai
Imports#
import os
from langchain.llms import GooseAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models.
from getpass import getpass
GOOSEAI_API_KEY = getpass()
os.environ["GOOSEAI_API_KEY"] = GOOSEAI_API_KEY
Create the GooseAI instance#
You can specify different parameters such as the model name, max tokens generated, temperature, etc.
llm = GooseAI()
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Contents
Setup
Calling a model
Chaining Calls
Replicate#
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you’re building your own machine learning models, Replicate makes it easy to deploy them at scale.
This example goes over how to use LangChain to interact with Replicate models.
Setup#
To run this notebook, you’ll need to create a replicate account and install the replicate python client.
!pip install replicate
# get a token: https://replicate.com/account
from getpass import getpass
REPLICATE_API_TOKEN = getpass()
import os
os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN
from langchain.llms import Replicate
from langchain import PromptTemplate, LLMChain
Calling a model#
Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name/version
For example, for this flan-t5 model, click on the API tab. The model name/version would be: daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8
Only the model param is required, but we can add other model params when initializing.
For example, if we were running stable diffusion and wanted to change the image dimensions:
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
Note that only the first output of a model will be returned.
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8")
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
'The legal driving age of dogs is 2. Cars are designed for humans to drive. Therefore, the final answer is yes.'
We can call any replicate model using this syntax. For example, we can call stable diffusion.
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
input={'image_dimensions': '512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")
image_output
'https://replicate.delivery/pbxt/Cf07B1zqzFQLOSBQcKG7m9beE74wf7kuip5W9VxHJFembefKE/out-0.png'
The model spits out a URL. Let’s render it.
from PIL import Image
import requests
from io import BytesIO
response = requests.get(image_output)
img = Image.open(BytesIO(response.content))
img
Chaining Calls#
The whole point of LangChain is to… chain! Here's an example of how to do that.
from langchain.chains import SimpleSequentialChain
First, let’s define the LLM for this model as a flan-5, and text2image as a stable diffusion model.
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8")
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf")
First prompt in the chain
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
Second prompt to get the logo for company description
second_prompt = PromptTemplate(
input_variables=["company_name"],
template="Write a description of a logo for this company: {company_name}",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)
Third prompt, let’s create the image based on the description output from prompt 2
third_prompt = PromptTemplate(
input_variables=["company_logo_description"],
template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)
Now let’s run it!
# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(chains=[chain, chain_two, chain_three], verbose=True)
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
novelty socks
todd & co.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
> Finished chain.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
response = requests.get("https://replicate.delivery/pbxt/eq6foRJngThCAEBqse3nL3Km2MBfLnWQNd0Hy2SQRo2LuprCB/out-0.png")
img = Image.open(BytesIO(response.content))
img
Cohere
Cohere#
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
This example goes over how to use LangChain to interact with Cohere models.
# Install the package
!pip install cohere
# get a new token: https://dashboard.cohere.ai/
from getpass import getpass
COHERE_API_KEY = getpass()
from langchain.llms import Cohere
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Cohere(cohere_api_key=COHERE_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
" Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"
OpenAI
OpenAI#
OpenAI offers a spectrum of models with different levels of power suitable for different tasks.
This example goes over how to use LangChain to interact with OpenAI models
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = OpenAI()
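If you want to target a specific model or adjust sampling, the OpenAI wrapper also accepts parameters such as model_name and temperature. A minimal sketch (the model name and temperature are illustrative choices, not defaults):
llm = OpenAI(model_name="text-davinci-003", temperature=0.9)  # illustrative model and temperature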
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
' Justin Bieber was born in 1994, so we are looking for the Super Bowl winner from that year. The Super Bowl in 1994 was Super Bowl XXVIII, and the winner was the Dallas Cowboys.'
ForefrontAI
Contents
Imports
Set the Environment API Key
Create the ForefrontAI instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
ForefrontAI#
The Forefront platform gives you the ability to fine-tune and use open source large language models.
This notebook goes over how to use LangChain with ForefrontAI.
Imports#
import os
from langchain.llms import ForefrontAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from ForefrontAI. You are given a 5 day free trial to test different models.
# get a new token: https://docs.forefront.ai/forefront/api-reference/authentication
from getpass import getpass
FOREFRONTAI_API_KEY = getpass()
os.environ["FOREFRONTAI_API_KEY"] = FOREFRONTAI_API_KEY
Create the ForefrontAI instance#
You can specify different parameters such as the model endpoint URL, length, temperature, etc. You must provide an endpoint URL.
llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")
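As a rough sketch of overriding a few of these settings, assuming the wrapper's length and temperature parameters (the values are illustrative and the endpoint URL is a placeholder):
llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE", length=256, temperature=0.7)  # illustrative values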
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Petals
Contents
Install petals
Imports
Set the Environment API Key
Create the Petals instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
Petals#
Petals runs 100B+ language models at home, BitTorrent-style.
This notebook goes over how to use LangChain with Petals.
Install petals#
The petals package is required to use the Petals API. Install petals using pip3 install petals.
!pip3 install petals
Imports#
import os
from langchain.llms import Petals
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from Hugging Face.
from getpass import getpass
HUGGINGFACE_API_KEY = getpass()
os.environ["HUGGINGFACE_API_KEY"] = HUGGINGFACE_API_KEY
Create the Petals instance#
You can specify different parameters such as the model name, max new tokens, temperature, etc.
# this can take several minutes to download big files!
llm = Petals(model_name="bigscience/bloom-petals")
Downloading: 1%|▏ | 40.8M/7.19G [00:24<15:44, 7.57MB/s]
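A sketch of overriding some of these settings, assuming the wrapper's max_new_tokens and temperature parameters (the values are illustrative):
llm = Petals(model_name="bigscience/bloom-petals", max_new_tokens=128, temperature=0.75)  # illustrative values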
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
StochasticAI
StochasticAI#
The Stochastic Acceleration Platform aims to simplify the life cycle of a deep learning model, from uploading and versioning the model, through training, compression, and acceleration, to putting it into production.
This example goes over how to use LangChain to interact with StochasticAI models.
You have to get the API_KEY and the API_URL here.
from getpass import getpass
STOCHASTICAI_API_KEY = getpass()
import os
os.environ["STOCHASTICAI_API_KEY"] = STOCHASTICAI_API_KEY
YOUR_API_URL = getpass()
from langchain.llms import StochasticAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = StochasticAI(api_url=YOUR_API_URL)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
"\n\nStep 1: In 1999, the St. Louis Rams won the Super Bowl.\n\nStep 2: In 1999, Beiber was born.\n\nStep 3: The Rams were in Los Angeles at the time.\n\nStep 4: So they didn't play in the Super Bowl that year.\n"
DeepInfra
Contents
Imports
Set the Environment API Key
Create the DeepInfra instance
Create a Prompt Template
Initiate the LLMChain
Run the LLMChain
DeepInfra#
DeepInfra provides several LLMs.
This notebook goes over how to use LangChain with DeepInfra.
Imports#
import os
from langchain.llms import DeepInfra
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from DeepInfra. You have to log in and get a new token.
You are given 1 hour of free serverless GPU compute to test different models (see here).
You can print your token with deepctl auth token
# get a new token: https://deepinfra.com/login?from=%2Fdash
from getpass import getpass
DEEPINFRA_API_TOKEN = getpass()
os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN
Create the DeepInfra instance#
Make sure to deploy your model first via deepctl deploy create -m google/flan-t5-xl (see here)
llm = DeepInfra(model_id="DEPLOYED MODEL ID")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in 2015?"
llm_chain.run(question)
GPT4All
Contents
Specify Model
GPT4All#
GitHub: nomic-ai/gpt4all, an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories, and dialogue.
This example goes over how to use LangChain to interact with GPT4All models.
%pip install pyllamacpp > /dev/null
Note: you may need to restart the kernel to use updated packages.
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Specify Model#
To run locally, download a compatible ggml-formatted model. For more info, visit https://github.com/nomic-ai/pyllamacpp
For full installation instructions go here.
The GPT4All Chat installer needs to decompress a 3GB LLM model during the installation process!
Note that new models are uploaded regularly - check the link above for the most recent .bin URL
local_path = './models/gpt4all-lora-quantized-ggml.bin' # replace with your desired local file path
Uncomment the below block to download a model. You may want to update url to a new version.
# import requests
# from pathlib import Path
# from tqdm import tqdm
# Path(local_path).parent.mkdir(parents=True, exist_ok=True)
# # Example model. Check https://github.com/nomic-ai/pyllamacpp for the latest models.
# url = 'https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-quantized-ggml.bin'
# # send a GET request to the URL to download the file. Stream since it's large
# response = requests.get(url, stream=True)
# # open the file in binary mode and write the contents of the response to it in chunks
# # This is a large file, so be prepared to wait.
# with open(local_path, 'wb') as f:
# for chunk in tqdm(response.iter_content(chunk_size=8192)):
# if chunk:
# f.write(chunk)
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
PromptLayer OpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer OpenAI#
PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware between your code and OpenAI's Python library.
PromptLayer records all your OpenAI API requests, allowing you to search and explore request history in the PromptLayer dashboard.
This example showcases how to connect to PromptLayer to start recording your OpenAI requests.
Another example is here.
Install PromptLayer#
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
!pip install promptlayer
Imports#
import os
from langchain.llms import PromptLayerOpenAI
import promptlayer
Set the Environment API Key#
You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.
You also need an OpenAI Key, called OPENAI_API_KEY.
from getpass import getpass
PROMPTLAYER_API_KEY = getpass()
os.environ["PROMPTLAYER_API_KEY"] = PROMPTLAYER_API_KEY
from getpass import getpass
OPENAI_API_KEY = getpass()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY | /content/https://python.langchain.com/en/latest/modules/models/llms/integrations/promptlayer_openai.html |
1fa9fd90081c-1 | Use the PromptLayerOpenAI LLM like normal#
You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature.
llm = PromptLayerOpenAI(pl_tags=["langchain"])
llm("I am a cat and I want")
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])
for res in llm_results.generations:
pl_request_id = res[0].generation_info["pl_request_id"]
promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.
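As a rough sketch of attaching a template, assuming a prompt named "example-template" already registered in your PromptLayer prompt registry and that the promptlayer.track.prompt helper is available (check the PromptLayer docs for the exact signature):
promptlayer.track.prompt(request_id=pl_request_id, prompt_name="example-template", prompt_input_variables={"topic": "jokes"})  # hypothetical template name and variables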
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
NLP Cloud
NLP Cloud#
NLP Cloud serves high-performance pre-trained or custom models for NER, sentiment analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.
This example goes over how to use LangChain to interact with NLP Cloud models.
!pip install nlpcloud
# get a token: https://docs.nlpcloud.com/#authentication
from getpass import getpass
NLPCLOUD_API_KEY = getpass()
import os
os.environ["NLPCLOUD_API_KEY"] = NLPCLOUD_API_KEY
from langchain.llms import NLPCloud
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = NLPCloud()
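If you want a specific model rather than the wrapper's default, you can pass a model name. A minimal sketch, assuming the wrapper accepts a model_name parameter and that the model shown is enabled on your NLP Cloud account:
llm = NLPCloud(model_name="finetuned-gpt-neox-20b")  # assumption: adjust to a model available on your account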
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
' Justin Bieber was born in 1994, so the team that won the Super Bowl that year was the San Francisco 49ers.'
Banana
Banana#
Banana is focused on building machine learning infrastructure.
This example goes over how to use LangChain to interact with Banana models.
# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python
!pip install banana-dev
# get new tokens: https://app.banana.dev/
# We need two tokens, not just an `api_key`: `BANANA_API_KEY` and `YOUR_MODEL_KEY`
import os
from getpass import getpass
os.environ["BANANA_API_KEY"] = "YOUR_API_KEY"
# OR
# BANANA_API_KEY = getpass()
from langchain.llms import Banana
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Banana(model_key="YOUR_MODEL_KEY")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Runhouse
Runhouse#
Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda.
Note: the code below uses the SelfHosted class names rather than Runhouse.
!pip install runhouse
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain import PromptTemplate, LLMChain
import runhouse as rh
INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')
# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
# name='rh-a10x')
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"
You can also load more custom models through the SelfHostedHuggingFaceLLM interface:
llm = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-small",
task="text2text-generation",
hardware=gpu,
)
llm("What is the capital of Germany?")
INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC
INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds
'berlin'
Using a custom load function, we can load a custom pipeline directly on the remote hardware:
def load_pipeline():
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline # Need to be inside the fn in notebooks
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
return pipe
def inference_fn(pipeline, prompt, stop = None):
return pipeline(prompt)[0]["generated_text"][len(prompt):]
llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds
'john w. bush'
You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:
pipeline = load_pipeline()
llm = SelfHostedPipeline.from_pipeline(
pipeline=pipeline, hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]  # same requirements as above; model_reqs was previously undefined here
)
Instead, we can also send it to the hardware’s filesystem, which will be much faster.
import pickle  # needed to serialize the pipeline
rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
Modal
Modal#
The Modal Python library provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.
Modal itself does not provide any LLMs, only the infrastructure.
This example goes over how to use LangChain to interact with Modal.
Here is another example of how to use LangChain to interact with Modal.
!pip install modal-client
# register and get a new token
!modal token new
Launching login page in your browser window...
If this is not showing up, please copy this URL into your web browser manually:
https://modal.com/token-flow/tf-ptEuGecm7T1T5YQe42kwM1
Waiting for authentication in the web browser...
^C
Aborted.
Follow these instructions to deal with secrets.
from langchain.llms import Modal
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Modal(endpoint_url="YOUR_ENDPOINT_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
SageMakerEndpoint
Contents
Set up
Example
SageMakerEndpoint#
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.
!pip3 install langchain boto3
Set up#
You have to set up the following required parameters of the SagemakerEndpoint call:
endpoint_name: The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
credentials_profile_name: The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
Example#
from langchain.docstore.document import Document
example_doc_1 = """
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""
docs = [
Document(
page_content=example_doc_1,
)
]
from typing import Dict
from langchain import PromptTemplate, SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
from langchain.chains.question_answering import load_qa_chain
import json
query = """How long was Elizabeth hospitalized?
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(ContentHandlerBase):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
input_str = json.dumps({"inputs": prompt, **model_kwargs})  # use the input key your deployed model expects (e.g. "inputs" or "text_inputs")
return input_str.encode('utf-8')
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
return response_json[0]["generated_text"]
content_handler = ContentHandler()
chain = load_qa_chain(
llm=SagemakerEndpoint(
endpoint_name="endpoint-name",
credentials_profile_name="credentials-profile-name",
region_name="us-west-2",
model_kwargs={"temperature":1e-10},
content_handler=content_handler
),
prompt=PROMPT
)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
How-To Guides
How-To Guides#
The examples here all address certain “how-to” guides for working with chat models.
How to use few shot examples
How to stream responses
Getting Started
Contents
PromptTemplates
LLMChain
Streaming
Getting Started#
This notebook covers how to get started with chat models. The interface is based around messages rather than raw text.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you’ll just be dealing with HumanMessage, AIMessage, and SystemMessage
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
AIMessage(content="J'aime programmer.", additional_kwargs={})
OpenAI’s chat model supports multiple messages as input. See here for more information. Here is an example of sending a system and user message to the chat model:
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content="J'aime programmer.", additional_kwargs={})
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter.
batch_messages = [
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
],
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
],
]
result = chat.generate(batch_messages)
result
LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})
You can recover things like token usage from this LLMResult
result.llm_output
{'token_usage': {'prompt_tokens': 71,
'completion_tokens': 18,
'total_tokens': 89}}
PromptTemplates#
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
AIMessage(content="J'adore la programmation.", additional_kwargs={})
If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, e.g.:
prompt=PromptTemplate(
template="You are a helpful assistant that translates {input_language} to {output_language}.",
input_variables=["input_language", "output_language"],
)
system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)
LLMChain#
You can use the existing LLMChain in a very similar way to before - provide a prompt and a model.
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
"J'adore la programmation."
Streaming#
Streaming is supported for ChatOpenAI through callback handling.
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
Integrations
Integrations#
The examples here all highlight how to integrate with different chat models.
Anthropic
Azure
OpenAI
PromptLayer ChatOpenAI
How to stream responses
How to stream responses#
This notebook goes over how to use streaming with a chat model.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
)
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
How to use few shot examples
Contents
Alternating Human/AI messages
System Messages
How to use few shot examples#
This notebook covers how to use few shot examples in chat models.
There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abstractions around this yet but rather using existing abstractions.
Alternating Human/AI messages#
The first way of doing few shot prompting relies on using alternating human/ai messages. See an example of this below.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.")
"I be lovin' programmin', me hearty!"
System Messages#
OpenAI provides an optional name parameter that they also recommend using in conjunction with system messages to do few shot prompting. Here is an example of how to do that below.
template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"})
example_ai = SystemMessagePromptTemplate.from_template("Argh me mateys", additional_kwargs={"name": "example_assistant"})
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.") | /content/https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html |
ec68f5406b2e-2 | chain.run("I love programming.")
"I be lovin' programmin', me hearty."
PromptLayer ChatOpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer ChatOpenAI#
This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.
Install PromptLayer#
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
pip install promptlayer
Imports#
import os
import promptlayer  # used in the PromptLayer Track section below
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage
Set the Environment API Key#
You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.
os.environ["PROMPTLAYER_API_KEY"] = "**********"
Use the PromptLayerOpenAI LLM like normal#
You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature.
chat = PromptLayerChatOpenAI(pl_tags=["langchain"])
chat([HumanMessage(content="I am a cat and I want")])
AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={})
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in chat_results.generations:
pl_request_id = res[0].generation_info["pl_request_id"]
promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
OpenAI
OpenAI#
This notebook covers how to get started with OpenAI chat models.
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content="J'aime programmer.", additional_kwargs={})
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}" | /content/https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html |
Subsets and Splits