Redis#
This notebook shows how to use functionality related to the Redis database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.redis import Redis
from langchain.document_loaders import TextLoader
loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='link')
rds.index_name
'b564189668a343648996bd5a1d353d4e'
query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
print(rds.add_texts(["Ankush went to Princeton"]))
['doc:333eadf75bd74be393acafa8bca48669']
query = "Princeton"
results = rds.similarity_search(query)
print(results[0].page_content)
Ankush went to Princeton
#Query
rds = Redis.from_existing_index(embeddings, redis_url="redis://localhost:6379", index_name='link')
query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)
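You can also control how many documents come back. A minimal sketch, assuming the standard VectorStore similarity_search signature with a k parameter:
# Return only the top 2 matches and print a preview of each
# (k is assumed to be the standard number-of-results parameter).
results = rds.similarity_search(query, k=2)
for doc in results:
    print(doc.page_content[:100])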
Weaviate#
This notebook shows how to use functionality related to the Weaviate vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
from langchain.document_loaders import TextLoader
loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
import weaviate
import os
WEAVIATE_URL = ""
client = weaviate.Client(
url=WEAVIATE_URL,
additional_headers={
'X-OpenAI-Api-Key': os.environ["OPENAI_API_KEY"]
}
)
client.schema.delete_all()
client.schema.get()
schema = {
"classes": [
{
"class": "Paragraph",
"description": "A written paragraph",
"vectorizer": "text2vec-openai",
"moduleConfig": {
"text2vec-openai": {
"model": "babbage",
"type": "text"
}
},
"properties": [
{
"dataType": ["text"],
"description": "The content of the paragraph",
"moduleConfig": {
"text2vec-openai": {
"skip": False,
"vectorizePropertyName": False
}
},
"name": "content",
},
],
},
]
}
client.schema.create(schema)
vectorstore = Weaviate(client, "Paragraph", "content")
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)
print(docs[0].page_content)
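Because the wrapper implements the generic VectorStore interface, you should also be able to add new texts after construction. A minimal sketch, assuming add_texts is supported by the Weaviate wrapper:
# Add a new text and query it back (assumes the generic VectorStore.add_texts method).
vectorstore.add_texts(["Ankush went to Princeton"])
docs = vectorstore.similarity_search("Princeton")
print(docs[0].page_content)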
Async API for LLM#
LangChain provides async support for LLMs by leveraging the asyncio library.
Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, only OpenAI and PromptLayerOpenAI are supported, but async support for other LLMs is on the roadmap.
You can use the agenerate method to call an OpenAI LLM asynchronously.
import time
import asyncio
from langchain.llms import OpenAI
def generate_serially():
    llm = OpenAI(temperature=0.9)
    for _ in range(10):
        resp = llm.generate(["Hello, how are you?"])
        print(resp.generations[0][0].text)

async def async_generate(llm):
    resp = await llm.agenerate(["Hello, how are you?"])
    print(resp.generations[0][0].text)

async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    tasks = [async_generate(llm) for _ in range(10)]
    await asyncio.gather(*tasks)
s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
Concurrent executed in 1.39 seconds.
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
I'm doing well, thanks! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
Serial executed in 5.77 seconds.
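Note that the top-level await above only works in a notebook (or another environment with a running event loop). Outside of Jupyter, a minimal sketch follows the comment in the cell above and wraps the call in asyncio.run:
# Plain-script equivalent of the notebook cell above.
import asyncio

if __name__ == "__main__":
    asyncio.run(generate_concurrently())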
Generic Functionality#
The examples here all address certain “how-to” guides for working with LLMs.
LLM Serialization: A walkthrough of how to serialize LLMs to and from disk.
LLM Caching: Covers different types of caches, and how to use a cache to save results of LLM calls.
Custom LLM: How to create and use a custom LLM class, in case you have an LLM not from one of the standard providers (including one that you host yourself).
Token Usage Tracking: How to track the token usage of various chains/agents/LLM calls.
Fake LLM: How to create and use a fake LLM for testing and debugging purposes.
Getting Started#
This notebook goes over how to use the LLM class in LangChain.
The LLM class is designed for interfacing with LLMs. There are many LLM providers (OpenAI, Cohere, Hugging Face, etc.); this class provides a standard interface for all of them. In this part of the documentation, we will focus on generic LLM functionality. For details on working with a specific LLM wrapper, please see the examples in the How-To section.
For this notebook, we will work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.
from langchain.llms import OpenAI
llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)
Generate Text: The most basic functionality an LLM has is just the ability to call it, passing in a string and getting back a string.
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Generate: More broadly, you can call it with a list of inputs, getting back a more complete response than just the text. This complete response includes things like multiple top responses, as well as LLM provider-specific information.
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)
len(llm_result.generations)
30
llm_result.generations[0]
[Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]
llm_result.generations[-1]
[Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]
You can also access provider specific information that is returned. This information is NOT standardized across providers.
llm_result.llm_output
{'token_usage': {'completion_tokens': 3903,
'total_tokens': 4023,
'prompt_tokens': 120}}
Number of Tokens: You can also estimate how many tokens a piece of text will be in that model. This is useful because models have a context length (and cost more for more tokens), which means you need to be aware of how long the text you are passing in is.
Notice that by default the tokens are estimated using a HuggingFace tokenizer.
llm.get_num_tokens("what a joke")
3
How-To Guides#
The examples here all address certain “how-to” guides for working with LLMs.
They are split into two categories:
Generic Functionality: Covering generic functionality all LLMs should have.
Integrations: Covering integrations with various LLM providers.
Asynchronous: Covering asynchronous functionality.
Streaming: Covering streaming functionality.
Integrations#
The examples here are all “how-to” guides for how to integrate with various LLM providers.
OpenAI: Covers how to connect to OpenAI models.
Cohere: Covers how to connect to Cohere models.
AI21: Covers how to connect to AI21 models.
Huggingface Hub: Covers how to connect to LLMs hosted on HuggingFace Hub.
Azure OpenAI: Covers how to connect to Azure-hosted OpenAI Models.
Manifest: Covers how to utilize the Manifest wrapper.
Goose AI: Covers how to utilize the Goose AI wrapper.
Writer: Covers how to utilize the Writer wrapper.
Banana: Covers how to utilize the Banana wrapper.
Modal: Covers how to utilize the Modal wrapper.
StochasticAI: Covers how to utilize the Stochastic AI wrapper.
Cerebrium: Covers how to utilize the Cerebrium AI wrapper.
Petals: Covers how to utilize the Petals wrapper.
Forefront AI: Covers how to utilize the Forefront AI wrapper.
PromptLayer OpenAI: Covers how to use PromptLayer with LangChain.
Anthropic: Covers how to use Anthropic models with LangChain.
DeepInfra: Covers how to utilize the DeepInfra wrapper.
Self-Hosted Models (via Runhouse): Covers how to run models on existing or on-demand remote compute with LangChain.
Key Concepts#
LLMs#
Wrappers around Large Language Models (in particular, the “generate” ability of large language models) are at the core of LangChain functionality.
The core method that these classes expose is a generate method, which takes in a list of strings and returns an LLMResult (which contains outputs for all input strings). Read more about LLMResult.
This interface operates over a list of strings because often the lists of strings can be batched to the LLM provider, providing speed and efficiency gains.
For convenience, this class also exposes a simpler, more user friendly interface (via __call__).
The interface for this takes in a single string, and returns a single string.
Generation#
The output of a single generation. Currently in LangChain this is just the generated text, although it could be extended in the future to contain log probs or the like.
LLMResult#
The full output of a call to the generate method of the LLM class.
Since the generate method takes as input a list of strings, this returns a list of results.
Each result consists of a list of generations (since you can request N generations per input string).
This also contains a llm_output attribute which contains provider-specific information about the call.
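As a minimal sketch (mirroring the Getting Started notebook above), the shape of an LLMResult looks like this:
from langchain.llms import OpenAI

llm = OpenAI()
result = llm.generate(["Tell me a joke", "Tell me a poem"])

# One inner list of Generation objects per input string.
print(len(result.generations))        # 2
print(result.generations[0][0].text)  # first generation for the first prompt

# Provider-specific information, e.g. token usage for OpenAI.
print(result.llm_output)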
Streaming with LLMs#
LangChain provides streaming support for LLMs. Currently, we only support streaming for the OpenAI and ChatOpenAI LLM implementations, but streaming support for other LLM implementations is on the roadmap. To utilize streaming, use a CallbackHandler that implements on_llm_new_token. In this example, we are using StreamingStdOutCallbackHandler.
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage
llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = llm("Write me a song about sparkling water.")
Verse 1
I'm sippin' on sparkling water,
It's so refreshing and light,
It's the perfect way to quench my thirst
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 2
I'm sippin' on sparkling water,
It's so bubbly and bright,
It's the perfect way to cool me down
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 3
I'm sippin' on sparkling water,
It's so light and so clear,
It's the perfect way to keep me cool
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
We still have access to the final LLMResult when using generate. However, token_usage is not currently supported for streaming.
llm.generate(["Tell me a joke."])
Q: What did the fish say when it hit the wall?
A: Dam!
LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': None, 'logprobs': None})]], llm_output={'token_usage': {}})
Here’s an example with ChatOpenAI:
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
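Instead of printing to stdout, you can handle the streamed tokens yourself. A minimal sketch, assuming you can subclass the concrete StreamingStdOutCallbackHandler and override only on_llm_new_token:
from langchain.llms import OpenAI
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

class CollectingHandler(StreamingStdOutCallbackHandler):
    """Collects streamed tokens into a list instead of writing them to stdout."""
    def __init__(self):
        super().__init__()
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.tokens.append(token)

handler = CollectingHandler()
llm = OpenAI(streaming=True, callback_manager=CallbackManager([handler]), temperature=0)
llm("Write me a haiku about sparkling water.")
print("".join(handler.tokens))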
Custom LLM#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
There is only one required thing that a custom LLM needs to implement:
A _call method that takes in a string, some optional stop words, and returns a string
There is a second optional thing it can implement:
An _identifying_params property that is used to help with printing of this class. Should return a dictionary.
Let’s implement a very simple custom LLM that just returns the first N characters of the input.
from langchain.llms.base import LLM
from typing import Optional, List, Mapping, Any
class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[:self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}
We can now use this like any other LLM.
llm = CustomLLM(n=10)
llm("This is a foobar thing")
'This is a '
We can also print the LLM and see its custom print.
print(llm)
CustomLLM
Params: {'n': 10}
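Like any other LLM, the class above can also be plugged into a chain. A minimal sketch (the prompt here is purely illustrative):
from langchain import PromptTemplate, LLMChain

prompt = PromptTemplate(template="Echo this: {text}", input_variables=["text"])
chain = LLMChain(prompt=prompt, llm=CustomLLM(n=10))
# The toy model returns the first 10 characters of the formatted prompt.
print(chain.run("hello world"))  # 'Echo this:'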
Fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this.
We start by using the FakeLLM in an agent.
from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
tools = load_tools(["python_repl"])
responses=[
"Action: Python REPL\nAction Input: print(2 + 2)",
"Final Answer: 4"
]
llm = FakeListLLM(responses=responses)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("whats 2 + 2")
> Entering new AgentExecutor chain...
Action: Python REPL
Action Input: print(2 + 2)
Observation: 4
Thought:Final Answer: 4
> Finished chain.
'4'
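The fake LLM can also be called directly, which is handy in unit tests. A minimal sketch (it is assumed to return its canned responses in order):
fake_llm = FakeListLLM(responses=["first canned answer", "second canned answer"])
print(fake_llm("any prompt"))      # 'first canned answer'
print(fake_llm("another prompt"))  # 'second canned answer'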
LLM Caching#
This notebook covers how to cache results of individual LLM calls.
from langchain.llms import OpenAI
In Memory Cache#
import langchain
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()
# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 30.7 ms, sys: 18.6 ms, total: 49.3 ms
Wall time: 791 ms
"\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 80 µs, sys: 0 ns, total: 80 µs
Wall time: 83.9 µs
"\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"
SQLite Cache#
!rm .langchain.db
# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke") | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\llms\\examples\\llm_caching.html" |
ac133e85915c-1 | llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Redis Cache#
# We can do the same thing with a Redis cache
# (make sure your local Redis instance is running first before running this example)
from redis import Redis
from langchain.cache import RedisCache
langchain.llm_cache = RedisCache(redis_=Redis())
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
SQLAlchemy Cache#
# You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.
# from langchain.cache import SQLAlchemyCache
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
# langchain.llm_cache = SQLAlchemyCache(engine)
Custom SQLAlchemy Schemas#
# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:
from sqlalchemy import Column, Integer, String, Computed, Index, Sequence
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import TSVectorType
from langchain.cache import SQLAlchemyCache
Base = declarative_base()
class FulltextLLMCache(Base):  # type: ignore
    """Postgres table for fulltext-indexed LLM Cache"""

    __tablename__ = "llm_cache_fulltext"
    id = Column(Integer, Sequence('cache_id'), primary_key=True)
    prompt = Column(String, nullable=False)
    llm = Column(String, nullable=False)
    idx = Column(Integer)
    response = Column(String)
    prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
    __table_args__ = (
        Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
    )
engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)
Optional Caching#
You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)
%%time
llm("Tell me a joke")
CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms
Wall time: 745 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
llm("Tell me a joke") | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\llms\\examples\\llm_caching.html" |
ac133e85915c-3 | %%time
llm("Tell me a joke")
CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms
Wall time: 623 ms
'\n\nTwo guys stole a calendar. They got six months each.'
Optional Caching in Chains#
You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first and then edit the LLM afterwards.
As an example, we will load a summarizer map-reduce chain. We will cache results for the map step, but not for the combine (reduce) step.
llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
text_splitter = CharacterTextSplitter()
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)
%%time
chain.run(docs)
CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms
Wall time: 5.09 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'
When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.
%%time
chain.run(docs)
CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'
LLM Serialization#
This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc).
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm
Loading#
First, let's go over loading an LLM from disk. LLMs can be saved on disk in two formats: json or yaml. No matter the extension, they are loaded in the same way.
!cat llm.json
{
"model_name": "text-davinci-003",
"temperature": 0.7,
"max_tokens": 256,
"top_p": 1.0,
"frequency_penalty": 0.0,
"presence_penalty": 0.0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"_type": "openai"
}
llm = load_llm("llm.json")
!cat llm.yaml
_type: openai
best_of: 1
frequency_penalty: 0.0
max_tokens: 256
model_name: text-davinci-003
n: 1
presence_penalty: 0.0
request_timeout: null
temperature: 0.7
top_p: 1.0
llm = load_llm("llm.yaml")
Saving#
If you want to go from an LLM in memory to a serialized version of it, you can do so easily by calling the .save method. Again, this supports both json and yaml.
llm.save("llm.json") | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\llms\\examples\\llm_serialization.html" |
3308a3b20b03-1 | llm.save("llm.json")
llm.save("llm.yaml")
Token Usage Tracking#
This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.
Let’s first look at an extremely simple example of tracking token usage for a single LLM call.
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    print(cb.total_tokens)
42
Anything inside the context manager will get tracked. Here’s an example of using it to track multiple calls in sequence.
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    result2 = llm("Tell me a joke")
    print(cb.total_tokens)
83
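The callback may also break the count down further; a sketch assuming it exposes separate prompt_tokens and completion_tokens counters in addition to total_tokens:
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    # Both attribute names below are assumptions about the callback object.
    print(cb.prompt_tokens)
    print(cb.completion_tokens)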
If a chain or agent with multiple steps in it is used, it will track all those steps.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
with get_openai_callback() as cb:
    response = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
    print(cb.total_tokens)
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Jason Sudeikis
Thought: I need to find out Jason Sudeikis' age
Action: Search
Action Input: "Jason Sudeikis age"
Observation: 47 years
Thought: I need to calculate 47 raised to the 0.23 power
Action: Calculator
Action Input: 47^0.23
Observation: Answer: 2.4242784855673896
Thought: I now know the final answer
Final Answer: Jason Sudeikis, Olivia Wilde's boyfriend, is 47 years old and his age raised to the 0.23 power is 2.4242784855673896.
> Finished chain.
1465
AI21#
This example goes over how to use LangChain to interact with AI21 models
from langchain.llms import AI21
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AI21()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Aleph Alpha#
This example goes over how to use LangChain to interact with Aleph Alpha models
from langchain.llms import AlephAlpha
from langchain import PromptTemplate, LLMChain
template = """Q: {question}
A:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AlephAlpha(model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is AI?"
llm_chain.run(question)
' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\n'
Anthropic#
This example goes over how to use LangChain to interact with Anthropic models
from langchain.llms import Anthropic
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Anthropic()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
" Step 1: Justin Beiber was born on March 1, 1994\nStep 2: The NFL season ends with the Super Bowl in January/February\nStep 3: Therefore, the Super Bowl that occurred closest to Justin Beiber's birth would be Super Bowl XXIX in 1995\nStep 4: The San Francisco 49ers won Super Bowl XXIX in 1995\n\nTherefore, the answer is the San Francisco 49ers won the Super Bowl in the year Justin Beiber was born."
Azure OpenAI LLM Example#
This notebook goes over how to use Langchain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.
API configuration#
You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:
# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2022-12-01` for the released version.
export OPENAI_API_VERSION=2022-12-01
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_KEY=<your Azure OpenAI API key>
Alternatively, you can configure the API right within your running Python environment:
import os
os.environ["OPENAI_API_TYPE"] = "azure"
...
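For completeness, a sketch of setting the remaining variables in Python, mirroring the bash configuration above (replace the placeholders with your own resource name and key):
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "https://your-resource-name.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "<your Azure OpenAI API key>"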
Deployments#
With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.
Let’s say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:
import openai
response = openai.Completion.create(
    engine="text-davinci-002-prod",
    prompt="This is a test",
    max_tokens=5
)
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(deployment_name="text-davinci-002-prod", model_name="text-davinci-002")
# Run the LLM
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
We can also print the LLM and see its custom print.
print(llm)
AzureOpenAI
Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
Banana#
This example goes over how to use LangChain to interact with Banana models
import os
from langchain.llms import Banana
from langchain import PromptTemplate, LLMChain
os.environ["BANANA_API_KEY"] = "YOUR_API_KEY"
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Banana(model_key="YOUR_MODEL_KEY")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
CerebriumAI LLM Example#
This notebook goes over how to use Langchain with CerebriumAI.
Install cerebrium#
The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.
$ pip3 install cerebrium
Imports#
import os
from langchain.llms import CerebriumAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from CerebriumAI. You are given 1 hour of serverless GPU compute for free to test different models.
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"
Create the CerebriumAI instance#
You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.
llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Cohere#
This example goes over how to use LangChain to interact with Cohere models
from langchain.llms import Cohere
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Cohere()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
" Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"
DeepInfra LLM Example#
This notebook goes over how to use Langchain with DeepInfra.
Imports#
import os
from langchain.llms import DeepInfra
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from DeepInfra. You are given 1 hour of serverless GPU compute for free to test different models.
You can print your token with deepctl auth token
os.environ["DEEPINFRA_API_TOKEN"] = "YOUR_KEY_HERE"
Create the DeepInfra instance#
Make sure to deploy your model first via deepctl deploy create -m google/flan-t5-xl (for example)
llm = DeepInfra(model_id="DEPLOYED MODEL ID")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in 2015?"
llm_chain.run(question)
ForefrontAI LLM Example#
This notebook goes over how to use Langchain with ForefrontAI.
Imports#
import os
from langchain.llms import ForefrontAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from ForefrontAI. You are given a 5 day free trial to test different models.
os.environ["FOREFRONTAI_API_KEY"] = "YOUR_KEY_HERE"
Create the ForefrontAI instance#
You can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.
llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
GooseAI LLM Example#
This notebook goes over how to use Langchain with GooseAI.
Install openai#
The openai package is required to use the GooseAI API. Install openai using pip3 install openai.
$ pip3 install openai
Imports#
import os
from langchain.llms import GooseAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models.
os.environ["GOOSEAI_API_KEY"] = "YOUR_KEY_HERE"
Create the GooseAI instance#
You can specify different parameters such as the model name, max tokens generated, temperature, etc.
llm = GooseAI()
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Hugging Face Hub#
This example showcases how to connect to the Hugging Face Hub.
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":0, "max_length":64}))
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
print(llm_chain.run(question))
The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The final answer: Seattle Seahawks.
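Other Hub repositories can be swapped in via repo_id. A sketch using gpt2 as an illustrative (assumed-compatible) text-generation model:
# repo_id and model_kwargs values here are illustrative only.
llm_chain = LLMChain(
    prompt=prompt,
    llm=HuggingFaceHub(repo_id="gpt2", model_kwargs={"temperature": 0.7, "max_length": 64}),
)
print(llm_chain.run(question))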
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models as in this example, see https://github.com/HazyResearch/manifest
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper
manifest = Manifest(
client_name = "huggingface",
client_connection = "http://127.0.0.1:5000"
)
print(manifest.client.get_model_params())
{'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B'}
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})
# Map reduce example
from langchain import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
_prompt = """Write a concise summary of the following:
{text}
CONCISE SUMMARY:"""
prompt = PromptTemplate(template=_prompt, input_variables=["text"])
text_splitter = CharacterTextSplitter()
mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)
with open('../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'
Compare HF Models#
from langchain.model_laboratory import ModelLaboratory
manifest1 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5000"
),
llm_kwargs={"temperature": 0.01}
)
manifest2 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5001"
),
llm_kwargs={"temperature": 0.01}
)
manifest3 = ManifestWrapper(
client=Manifest(
client_name="huggingface",
client_connection="http://127.0.0.1:5002"
),
llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
ManifestWrapper
Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01}
pink
ManifestWrapper
Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01}
A flamingo is a small, round
ManifestWrapper
Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01}
pink
Modal#
This example goes over how to use LangChain to interact with Modal models
from langchain.llms import Modal
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Modal(endpoint_url="YOUR_ENDPOINT_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
OpenAI#
This example goes over how to use LangChain to interact with OpenAI models
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in that year was the Dallas Cowboys.'
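The wrapper also accepts the usual OpenAI completion parameters, such as the model name and sampling temperature. A sketch (the values here are illustrative):
llm = OpenAI(model_name="text-davinci-003", temperature=0.9)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.run(question)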
Petals LLM Example#
This notebook goes over how to use Langchain with Petals.
Install petals#
The petals package is required to use the Petals API. Install petals using pip3 install petals.
$ pip3 install petals
Imports#
import os
from langchain.llms import Petals
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from Huggingface.
os.environ["HUGGINGFACE_API_KEY"] = "YOUR_KEY_HERE"
Create the Petals instance#
You can specify different parameters such as the model name, max new tokens, temperature, etc.
llm = Petals(model_name="bigscience/bloom-petals")
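For instance, here is a minimal sketch of passing a few of those parameters when constructing the wrapper; the field names max_new_tokens and temperature are assumptions about this wrapper, so check the class for the exact options in your installed version.
llm = Petals(
    model_name="bigscience/bloom-petals",
    max_new_tokens=256,  # assumed field name: cap on tokens generated per call
    temperature=0.7,  # assumed field name: sampling temperature
)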
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
.ipynb
.pdf
PromptLayer OpenAI
Contents
Install PromptLayer
Imports
Set the Environment API Key
Use the PromptLayerOpenAI LLM like normal
Using PromptLayer Track
PromptLayer OpenAI#
This example showcases how to connect to PromptLayer to start recording your OpenAI requests.
Install PromptLayer#
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
pip install promptlayer
Imports#
import os
from langchain.llms import PromptLayerOpenAI
import promptlayer
Set the Environment API Key#
You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.
os.environ["PROMPTLAYER_API_KEY"] = "********"
Use the PromptLayerOpenAI LLM like normal#
You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature.
llm = PromptLayerOpenAI(pl_tags=["langchain"])
llm("I am a cat and I want")
' to go outside\n\nUnfortunately, cats cannot go outside without being supervised by a human. Going outside can be dangerous for cats, as they may come into contact with cars, other animals, or other dangers. If you want to go outside, ask your human to take you on a supervised walk or to a safe, enclosed outdoor space.'
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])
for res in llm_results.generations:
pl_request_id = res[0].generation_info["pl_request_id"]
promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
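As a hedged sketch (not part of the original run), attaching a registry template to the tracked request above could look like the following; promptlayer.track.prompt, the template name, and its input variables are assumptions based on PromptLayer's documented tracking API, so verify them against the PromptLayer docs.
promptlayer.track.prompt(
    request_id=pl_request_id,  # request id captured in the loop above
    prompt_name="example-template",  # hypothetical name in your prompt registry
    prompt_input_variables={"topic": "jokes"},  # hypothetical template variables
)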
.ipynb
.pdf
SageMakerEndpoint
SageMakerEndpoint#
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.
!pip3 install langchain boto3
from langchain.docstore.document import Document
example_doc_1 = """
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""
docs = [
Document(
page_content=example_doc_1,
)
]
from typing import Dict
from langchain import PromptTemplate, SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
from langchain.chains.question_answering import load_qa_chain
import json
query = """How long was Elizabeth hospitalized?
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(ContentHandlerBase):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
# NOTE: the JSON key must match the input format your deployed model expects; "inputs" is assumed here
input_str = json.dumps({"inputs": prompt, **model_kwargs})
return input_str.encode('utf-8')
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
return response_json[0]["generated_text"]
content_handler = ContentHandler()
chain = load_qa_chain(
llm=SagemakerEndpoint(
endpoint_name="endpoint-name",
credentials_profile_name="credentials-profile-name",
region_name="us-west-2",
model_kwargs={"temperature":1e-10},
content_handler=content_handler
),
prompt=PROMPT
)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
.ipynb
.pdf
Self-Hosted Models via Runhouse
Self-Hosted Models via Runhouse#
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda.
For more information, see Runhouse or the Runhouse docs.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain import PromptTemplate, LLMChain
import runhouse as rh
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')
# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
# name='rh-a10x')
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"
You can also load more custom models through the SelfHostedHuggingFaceLLM interface:
llm = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-small",
task="text2text-generation",
hardware=gpu,
)
llm("What is the capital of Germany?")
INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC
INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds
'berlin'
Using a custom load function, we can load a custom pipeline directly on the remote hardware:
def load_pipeline():
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline # Need to be inside the fn in notebooks
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
return pipe
def inference_fn(pipeline, prompt, stop = None):
return pipeline(prompt)[0]["generated_text"][len(prompt):]
llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds
'john w. bush'
You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:
pipeline = load_pipeline()
llm = SelfHostedPipeline.from_pipeline(
pipeline=pipeline, hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]
)
Instead, we can also send it to the hardware’s filesystem, which will be much faster.
import pickle
rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
.ipynb
.pdf
StochasticAI
StochasticAI#
This example goes over how to use LangChain to interact with StochasticAI models
from langchain.llms import StochasticAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = StochasticAI(api_url="YOUR_API_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
.ipynb
.pdf
Writer
Writer#
This example goes over how to use LangChain to interact with Writer models
from langchain.llms import Writer
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Writer()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
.ipynb
.pdf
Getting Started
Contents
ChatMessageHistory
ConversationBufferMemory
Using in a chain
Saving Message History
Getting Started#
This notebook walks through how LangChain thinks about memory.
Memory involves keeping a concept of state around throughout a user’s interactions with a language model. A user’s interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type.
In general, there are two ways to understand each type of memory: the standalone functions that extract information from a sequence of messages, and the way this type of memory can be used in a chain.
Memory can return multiple pieces of information (for example, the most recent N messages and a summary of all previous messages). The returned information can either be a string or a list of messages.
In this notebook, we will walk through the simplest form of memory: “buffer” memory, which just involves keeping a buffer of all prior messages. We will show how to use the modular utility functions here, then show how it can be used in a chain (both returning a string as well as a list of messages).
ChatMessageHistory#
One of the core utility classes underpinning most (if not all) memory modules is the ChatMessageHistory class. This is a super lightweight wrapper which exposes convenience methods for saving Human messages, AI messages, and then fetching them all.
You may want to use this class directly if you are managing memory outside of a chain.
from langchain.memory import ChatMessageHistory
history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages
[HumanMessage(content='hi!', additional_kwargs={}),
AIMessage(content='whats up?', additional_kwargs={})]
ConversationBufferMemory#
We now show how to use this simple concept in a chain. We first showcase ConversationBufferMemory which is just a wrapper around ChatMessageHistory that extracts the messages in a variable.
We can first extract it as a string.
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory()
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")
memory.load_memory_variables({})
{'history': 'Human: hi!\nAI: whats up?'}
We can also get the history as a list of messages
memory = ConversationBufferMemory(return_messages=True)
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")
memory.load_memory_variables({})
{'history': [HumanMessage(content='hi!', additional_kwargs={}),
AIMessage(content='whats up?', additional_kwargs={})]}
Using in a chain#
Finally, let’s take a look at using this in a chain (setting verbose=True so we can see the prompt).
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
llm = OpenAI(temperature=0)
conversation = ConversationChain(
llm=llm,
verbose=True,
memory=ConversationBufferMemory()
)
conversation.predict(input="Hi there!")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI:
> Finished chain.
" Hi there! It's nice to meet you. How can I help you today?"
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:
> Finished chain.
" That's great! It's always nice to have a conversation with someone new. What would you like to talk about?"
conversation.predict(input="Tell me about yourself.")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Human: I'm doing well! Just having a conversation with an AI.
AI: That's great! It's always nice to have a conversation with someone new. What would you like to talk about?
Human: Tell me about yourself.
AI:
> Finished chain.
" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also constantly learning and updating my knowledge base so I can provide more accurate and helpful answers."
Saving Message History#
You may often want to save messages, and then load them to use again. This can be done easily by first converting the messages to normal Python dictionaries, saving them (e.g. as JSON), and then loading them. Here is an example of doing that.
import json
from langchain.memory import ChatMessageHistory
from langchain.schema import messages_from_dict, messages_to_dict
history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")
dicts = messages_to_dict(history.messages)
dicts
[{'type': 'human', 'data': {'content': 'hi!', 'additional_kwargs': {}}},
{'type': 'ai', 'data': {'content': 'whats up?', 'additional_kwargs': {}}}]
new_messages = messages_from_dict(dicts)
new_messages
[HumanMessage(content='hi!', additional_kwargs={}),
AIMessage(content='whats up?', additional_kwargs={})]
And that’s it for the getting started! There are plenty of different types of memory, check out our examples to see them all
.rst
.pdf
How-To Guides
Contents
Types
Usage
How-To Guides#
Types#
The first set of examples all highlight different types of memory; a short instantiation sketch follows the list.
Buffer: How to use a type of memory that just keeps previous messages in a buffer.
Buffer Window: How to use a type of memory that keeps previous messages in a buffer but only uses the previous k of them.
Summary: How to use a type of memory that summarizes previous messages.
Summary Buffer: How to use a type of memory that keeps a buffer of messages up to a point, and then summarizes them.
Entity Memory: How to use a type of memory that organizes information by entity.
Knowledge Graph Memory: How to use a type of memory that extracts and organizes information in a knowledge graph
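As a quick illustration of the buffer window and summary types above, here is a minimal sketch of how they are instantiated (the constructor arguments shown are illustrative):
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferWindowMemory, ConversationSummaryMemory
# Keep only the last k exchanges verbatim in the prompt
window_memory = ConversationBufferWindowMemory(k=2)
# Summarize prior exchanges with an LLM instead of replaying them verbatim
summary_memory = ConversationSummaryMemory(llm=OpenAI(temperature=0))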
Usage#
The examples here all highlight how to use memory in different ways.
Adding Memory: How to add a memory component to any single input chain.
ChatGPT Clone: How to recreate ChatGPT with LangChain prompting + memory components.
Adding Memory to Multi-Input Chain: How to add a memory component to any multiple input chain.
Conversational Memory Customization: How to customize existing conversation memory components.
Custom Memory: How to write your own custom memory component.
Adding Memory to Agents: How to add a memory component to any agent.
Conversation Agent: Example of a conversation agent, which combines memory with agents and a conversation focused prompt.
Multiple Memory: How to use multiple types of memory in the same chain.
.md
.pdf
Key Concepts
Contents
Memory
Conversational Memory
Entity Memory
Key Concepts#
Memory#
By default, Chains and Agents are stateless, meaning that they treat each incoming query independently.
In some applications (chatbots being a GREAT example) it is highly important to remember previous interactions,
both at a short-term and at a long-term level. The concept of “Memory” exists to do exactly that.
Conversational Memory#
One of the simpler forms of memory occurs in chatbots, where they remember previous conversations.
There are a few different ways to accomplish this (a short sketch of the combination approach follows the list):
Buffer: This is just passing in the past N interactions in as context. N can be chosen based on a fixed number, the length of the interactions, or other!
Summary: This involves summarizing previous conversations and passing that summary in, instead of the raw dialogue itself. Compared to Buffer, this compresses information: meaning it is more lossy, but also less likely to run into context length limits.
Combination: A combination of the above two approaches, where you compute a summary but also pass in some previous interactions directly!
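A minimal sketch of the combination approach, assuming the ConversationSummaryBufferMemory class and an illustrative token limit:
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationSummaryBufferMemory
# Recent messages are kept verbatim; older ones are folded into a running summary
memory = ConversationSummaryBufferMemory(llm=OpenAI(temperature=0), max_token_limit=200)
conversation = ConversationChain(llm=OpenAI(temperature=0), memory=memory)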
Entity Memory#
A more complex form of memory is remembering information about specific entities in the conversation.
This is a more direct and organized way of remembering information over time.
Putting it in a more structured form also has the benefit of allowing easy inspection of what is known about specific entities.
For a guide on how to use this type of memory, see this notebook.
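As a brief sketch (the full walkthrough is in the linked notebook), entity memory is constructed with an LLM that is used to extract and summarize the entities:
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory
# The LLM decides which entities to track and what to remember about each of them
entity_memory = ConversationEntityMemory(llm=OpenAI(temperature=0))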
.ipynb
.pdf
Adding Memory To an LLMChain
Adding Memory To an LLMChain#
This notebook goes over how to use the Memory class with an LLMChain. For the purposes of this walkthrough, we will add the ConversationBufferMemory class, although this can be any memory class.
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI, LLMChain, PromptTemplate
The most important step is setting up the prompt correctly. In the below prompt, we have two input keys: one for the actual input, another for the input from the Memory class. Importantly, we make sure the keys in the PromptTemplate and the ConversationBufferMemory match up (chat_history).
template = """You are a chatbot having a conversation with a human.
{chat_history}
Human: {human_input}
Chatbot:"""
prompt = PromptTemplate(
input_variables=["chat_history", "human_input"],
template=template
)
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(
llm=OpenAI(),
prompt=prompt,
verbose=True,
memory=memory,
)
llm_chain.predict(human_input="Hi there my friend")
> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.
Human: Hi there my friend
Chatbot:
> Finished LLMChain chain.
' Hi there, how are you doing today?'
llm_chain.predict(human_input="Not to bad - how are you?")
> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.
Human: Hi there my friend
AI: Hi there, how are you doing today?
Human: Not to bad - how are you?
Chatbot:
> Finished LLMChain chain.
" I'm doing great, thank you for asking!"
.ipynb
.pdf
Adding Memory to a Multi-Input Chain
Adding Memory to a Multi-Input Chain#
Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. As an example of such a chain, we will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
with open('../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": i} for i in range(len(texts))])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
query = "What did the president say about Justice Breyer"
docs = docsearch.similarity_search(query)
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
template = """You are a chatbot having a conversation with a human.
Given the following extracted parts of a long document and a question, create a final answer.
{context}
{chat_history}
Human: {human_input}
Chatbot:""" | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\adding_memory_chain_multiple_inputs.html" |
f851e1ab4208-1 | {context}
{chat_history}
Human: {human_input}
Chatbot:"""
prompt = PromptTemplate(
input_variables=["chat_history", "human_input", "context"],
template=template
)
memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "human_input": query}, return_only_outputs=True)
{'output_text': ' Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.'}
print(chain.memory.buffer)
Human: What did the president say about Justice Breyer
AI: Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
.ipynb
.pdf
Adding Memory to an Agent
Adding Memory to an Agent#
This notebook goes over adding memory to an Agent. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:
Adding memory to an LLM Chain
Custom Agents
In order to add a memory to an agent we are going to perform the following steps:
We are going to create an LLMChain with memory.
We are going to use that LLMChain to create a custom Agent.
For the purposes of this exercise, we are going to create a simple custom Agent that has access to a search tool and utilizes the ConversationBufferMemory class.
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI, LLMChain
from langchain.utilities import GoogleSearchAPIWrapper
search = GoogleSearchAPIWrapper()
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events"
)
]
Notice the usage of the chat_history variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory.
prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "chat_history", "agent_scratchpad"]
)
memory = ConversationBufferMemory(memory_key="chat_history")
We can now construct the LLMChain, with the Memory object, and then create the agent.
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)
agent_chain.run(input="How many people live in canada?")
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada
Action: Search
Action Input: Population of Canada
Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time.
Thought: I now know the final answer
Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.
> Finished AgentExecutor chain.
'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'
To test the memory of this agent, we can ask a followup question that relies on information in the previous exchange to be answered correctly.
agent_chain.run(input="what is their national anthem called?")
> Entering new AgentExecutor chain...
Thought: I need to find out what the national anthem of Canada is called.
Action: Search
Action Input: National Anthem of Canada
Observation: Jun 7, 2010 ... https://twitter.com/CanadaImmigrantCanadian National Anthem O Canada in HQ - complete with lyrics, captions, vocals & music.LYRICS:O Canada! Nov 23, 2022 ... After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ... O Canada, national anthem of Canada. It was proclaimed the official national anthem on July 1, 1980. “God Save the Queen” remains the royal anthem of Canada ... O Canada! Our home and native land! True patriot love in all of us command. Car ton bras sait porter l'épée,. Il sait porter la croix! "O Canada" (French: Ô Canada) is the national anthem of Canada. The song was originally commissioned by Lieutenant Governor of Quebec Théodore Robitaille ... Feb 1, 2018 ... It was a simple tweak — just two words. But with that, Canada just voted to make its national anthem, “O Canada,” gender neutral, ... "O Canada" was proclaimed Canada's national anthem on July 1,. 1980, 100 years after it was first sung on June 24, 1880. The music. Patriotic music in Canada dates back over 200 years as a distinct category from British or French patriotism, preceding the first legal steps to ... Feb 4, 2022 ... English version: O Canada! Our home and native land! True patriot love in all of us command. With glowing hearts we ... Feb 1, 2018 ... Canada's Senate has passed a bill making the country's national anthem gender-neutral. If you're not familiar with the words to “O Canada,” ... | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\agent_with_memory.html" |
Thought: I now know the final answer.
Final Answer: The national anthem of Canada is called "O Canada".
> Finished AgentExecutor chain.
'The national anthem of Canada is called "O Canada".'
We can see that the agent remembered that the previous question was about Canada, and properly asked Google Search what the name of Canada’s national anthem was.
For fun, let’s compare this to an agent that does NOT have memory.
prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "agent_scratchpad"]
)
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_without_memory = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_without_memory.run("How many people live in canada?")
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada
Action: Search
Action Input: Population of Canada
Observation: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time.
Thought: I now know the final answer
Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.
> Finished AgentExecutor chain.
'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'
agent_without_memory.run("what is their national anthem called?")
> Entering new AgentExecutor chain...
Thought: I should look up the answer
Action: Search
Action Input: national anthem of [country]
Observation: Most nation states have an anthem, defined as "a song, as of praise, devotion, or patriotism"; most anthems are either marches or hymns in style. List of all countries around the world with its national anthem. ... Title and lyrics in the language of the country and translated into English, Aug 1, 2021 ... 1. Afghanistan, "Milli Surood" (National Anthem) · 2. Armenia, "Mer Hayrenik" (Our Fatherland) · 3. Azerbaijan (a transcontinental country with ... A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. National Anthem of Every Country ; Fiji, “Meda Dau Doka” (“God Bless Fiji”) ; Finland, “Maamme”. (“Our Land”) ; France, “La Marseillaise” (“The Marseillaise”). You can find an anthem in the menu at the top alphabetically or you can use the search feature. This site is focussed on the scholarly study of national anthems ... Feb 13, 2022 ... The 38-year-old country music artist had the honor of singing the National Anthem during this year's big game, and she did not disappoint. Oldest of the World's National Anthems ; France, La Marseillaise (“The Marseillaise”), 1795 ; Argentina, Himno Nacional Argentino (“Argentine National Anthem”) ... Mar 3, 2022 ... Country music star Jessie James Decker gained the respect of music and hockey fans alike after a jaw-dropping rendition of "The Star-Spangled ... This list shows the country on the left, the national anthem in the ... There are many countries over the world who have a national anthem of their own. | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\agent_with_memory.html" |
Thought: I now know the final answer
Final Answer: The national anthem of [country] is [name of anthem].
> Finished AgentExecutor chain.
'The national anthem of [country] is [name of anthem].'
.ipynb
.pdf
ChatGPT Clone
ChatGPT Clone#
This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.
Shows off the example as in https://www.engraved.blog/building-a-virtual-machine-inside/
from langchain import OpenAI, ConversationChain, LLMChain, PromptTemplate
from langchain.memory import ConversationBufferWindowMemory
template = """Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
{history}
Human: {human_input}
Assistant:"""
prompt = PromptTemplate(
input_variables=["history", "human_input"],
template=template
)
chatgpt_chain = LLMChain(
llm=OpenAI(temperature=0),
prompt=prompt,
verbose=True,
memory=ConversationBufferWindowMemory(k=2),
)
output = chatgpt_chain.predict(human_input="I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.
Assistant:
> Finished chain.
```
/home/user
```
output = chatgpt_chain.predict(human_input="ls ~")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.
AI:
```
$ pwd
/
```
Human: ls ~
Assistant:
> Finished LLMChain chain.
```
$ ls ~
Desktop Documents Downloads Music Pictures Public Templates Videos
```
output = chatgpt_chain.predict(human_input="cd ~")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.
AI:
```
$ pwd
/
```
Human: ls ~
AI:
```
$ ls ~
Desktop Documents Downloads Music Pictures Public Templates Videos
```
Human: cd ~
Assistant:
> Finished LLMChain chain.
```
$ cd ~
$ pwd
/home/user
```
output = chatgpt_chain.predict(human_input="{Please make a file jokes.txt inside and put some jokes inside}")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: ls ~
AI:
```
$ ls ~
Desktop Documents Downloads Music Pictures Public Templates Videos
```
Human: cd ~
AI:
```
$ cd ~
$ pwd
/home/user
```
Human: {Please make a file jokes.txt inside and put some jokes inside}
Assistant:
> Finished LLMChain chain.
```
$ touch jokes.txt
$ echo "Why did the chicken cross the road? To get to the other side!" >> jokes.txt
$ echo "What did the fish say when it hit the wall? Dam!" >> jokes.txt
$ echo "Why did the scarecrow win the Nobel Prize? Because he was outstanding in his field!" >> jokes.txt
```
output = chatgpt_chain.predict(human_input="""echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py && python3 run.py""")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: cd ~
AI:
```
$ cd ~
$ pwd
/home/user
```
Human: {Please make a file jokes.txt inside and put some jokes inside}
AI:
```
$ touch jokes.txt
$ echo "Why did the chicken cross the road? To get to the other side!" >> jokes.txt
$ echo "What did the fish say when it hit the wall? Dam!" >> jokes.txt
$ echo "Why did the scarecrow win the Nobel Prize? Because he was outstanding in his field!" >> jokes.txt
```
Human: echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py && python3 run.py
Assistant:
> Finished LLMChain chain.
```
$ echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py
$ python3 run.py
Result: 33
```
output = chatgpt_chain.predict(human_input="""echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py && python3 run.py""")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: {Please make a file jokes.txt inside and put some jokes inside}
AI:
```
$ touch jokes.txt
$ echo "Why did the chicken cross the road? To get to the other side!" >> jokes.txt
$ echo "What did the fish say when it hit the wall? Dam!" >> jokes.txt
$ echo "Why did the scarecrow win the Nobel Prize? Because he was outstanding in his field!" >> jokes.txt
```
Human: echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py && python3 run.py
AI:
```
$ echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py
$ python3 run.py
Result: 33
```
Human: echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py && python3 run.py
Assistant:
> Finished LLMChain chain.
```
$ echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py
$ python3 run.py
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```
docker_input = """echo -e "echo 'Hello from Docker" > entrypoint.sh && echo -e "FROM ubuntu:20.04\nCOPY entrypoint.sh entrypoint.sh\nENTRYPOINT [\"/bin/sh\",\"entrypoint.sh\"]">Dockerfile && docker build . -t my_docker_image && docker run -t my_docker_image"""
output = chatgpt_chain.predict(human_input=docker_input)
print(output)
> Entering new LLMChain chain... | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py && python3 run.py
AI:
```
$ echo -e "x=lambda y:y*5+3;print('Result:' + str(x(6)))" > run.py
$ python3 run.py
Result: 33
```
Human: echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py && python3 run.py
AI:
``` | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
$ echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py
$ python3 run.py
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```
Human: echo -e "echo 'Hello from Docker" > entrypoint.sh && echo -e "FROM ubuntu:20.04
COPY entrypoint.sh entrypoint.sh
ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile && docker build . -t my_docker_image && docker run -t my_docker_image
Assistant:
> Finished LLMChain chain.
```
$ echo -e "echo 'Hello from Docker" > entrypoint.sh
$ echo -e "FROM ubuntu:20.04
COPY entrypoint.sh entrypoint.sh
ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile
$ docker build . -t my_docker_image
$ docker run -t my_docker_image
Hello from Docker
```
output = chatgpt_chain.predict(human_input="nvidia-smi")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py && python3 run.py
AI:
```
$ echo -e "print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])" > run.py
$ python3 run.py
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```
Human: echo -e "echo 'Hello from Docker" > entrypoint.sh && echo -e "FROM ubuntu:20.04
COPY entrypoint.sh entrypoint.sh
ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile && docker build . -t my_docker_image && docker run -t my_docker_image
AI:
```
$ echo -e "echo 'Hello from Docker" > entrypoint.sh
$ echo -e "FROM ubuntu:20.04
COPY entrypoint.sh entrypoint.sh
ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
$ docker build . -t my_docker_image
$ docker run -t my_docker_image
Hello from Docker
```
Human: nvidia-smi
Assistant:
> Finished LLMChain chain.
```
$ nvidia-smi
Sat May 15 21:45:02 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:01:00.0 Off | N/A |
| N/A 45C P0 N/A / N/A | 511MiB / 10206MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
output = chatgpt_chain.predict(human_input="ping bbc.com")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI. | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: echo -e "echo 'Hello from Docker" > entrypoint.sh && echo -e "FROM ubuntu:20.04
COPY entrypoint.sh entrypoint.sh
ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile && docker build . -t my_docker_image && docker run -t my_docker_image
AI:
```
$ echo -e "echo 'Hello from Docker" > entrypoint.sh
$ echo -e "FROM ubuntu:20.04
COPY entrypoint.sh entrypoint.sh
ENTRYPOINT ["/bin/sh","entrypoint.sh"]">Dockerfile
$ docker build . -t my_docker_image
$ docker run -t my_docker_image
Hello from Docker
```
Human: nvidia-smi
AI:
``` | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
$ nvidia-smi
Sat May 15 21:45:02 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:01:00.0 Off | N/A |
| N/A 45C P0 N/A / N/A | 511MiB / 10206MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
Human: ping bbc.com
Assistant:
> Finished LLMChain chain.
```
$ ping bbc.com
PING bbc.com (151.101.65.81): 56 data bytes
64 bytes from 151.101.65.81: icmp_seq=0 ttl=53 time=14.945 ms
64 bytes from 151.101.65.81: icmp_seq=1 ttl=53 time=14.945 ms
64 bytes from 151.101.65.81: icmp_seq=2 ttl=53 time=14.945 ms
--- bbc.com ping statistics --- | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 14.945/14.945/14.945/0.000 ms
```
output = chatgpt_chain.predict(human_input="""curl -fsSL "https://api.github.com/repos/pytorch/pytorch/releases/latest" | jq -r '.tag_name' | sed 's/[^0-9\.\-]*//g'""")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: nvidia-smi
AI:
```
$ nvidia-smi
Sat May 15 21:45:02 2021
+-----------------------------------------------------------------------------+ | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:01:00.0 Off | N/A |
| N/A 45C P0 N/A / N/A | 511MiB / 10206MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
Human: ping bbc.com
AI:
```
$ ping bbc.com
PING bbc.com (151.101.65.81): 56 data bytes
64 bytes from 151.101.65.81: icmp_seq=0 ttl=53 time=14.945 ms
64 bytes from 151.101.65.81: icmp_seq=1 ttl=53 time=14.945 ms
64 bytes from 151.101.65.81: icmp_seq=2 ttl=53 time=14.945 ms
--- bbc.com ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 14.945/14.945/14.945/0.000 ms | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
```
Human: curl -fsSL "https://api.github.com/repos/pytorch/pytorch/releases/latest" | jq -r '.tag_name' | sed 's/[^0-9\.\-]*//g'
Assistant:
> Finished LLMChain chain.
```
$ curl -fsSL "https://api.github.com/repos/pytorch/pytorch/releases/latest" | jq -r '.tag_name' | sed 's/[^0-9\.\-]*//g'
1.8.1
```
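The simulated answer (1.8.1) is plausible for the date shown earlier in the transcript, but the real endpoint is live; a hedged local comparison using the requests library (an assumption: requests is installed and network access is available, and the tag returned today will be newer than 1.8.1):
```
# Query the real GitHub API for comparison with the simulated answer.
import re
import requests

resp = requests.get(
    "https://api.github.com/repos/pytorch/pytorch/releases/latest", timeout=10
)
tag = resp.json()["tag_name"]
print(re.sub(r"[^0-9.\-]", "", tag))  # e.g. a 2.x.y version at time of running
```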
output = chatgpt_chain.predict(human_input="lynx https://www.deepmind.com/careers")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: ping bbc.com
AI:
```
$ ping bbc.com | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
PING bbc.com (151.101.65.81): 56 data bytes
64 bytes from 151.101.65.81: icmp_seq=0 ttl=53 time=14.945 ms
64 bytes from 151.101.65.81: icmp_seq=1 ttl=53 time=14.945 ms
64 bytes from 151.101.65.81: icmp_seq=2 ttl=53 time=14.945 ms
--- bbc.com ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 14.945/14.945/14.945/0.000 ms
```
Human: curl -fsSL "https://api.github.com/repos/pytorch/pytorch/releases/latest" | jq -r '.tag_name' | sed 's/[^0-9\.\-]*//g'
AI:
```
$ curl -fsSL "https://api.github.com/repos/pytorch/pytorch/releases/latest" | jq -r '.tag_name' | sed 's/[^0-9\.\-]*//g'
1.8.1
```
Human: lynx https://www.deepmind.com/careers
Assistant:
> Finished LLMChain chain.
```
$ lynx https://www.deepmind.com/careers
DeepMind Careers
Welcome to DeepMind Careers. We are a world-leading artificial intelligence research and development company, and we are looking for talented people to join our team.
We offer a range of exciting opportunities in research, engineering, product, and operations. Our mission is to solve intelligence and make it useful, and we are looking for people who share our passion for pushing the boundaries of AI. | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |
Explore our current openings and apply today. We look forward to hearing from you.
```
output = chatgpt_chain.predict(human_input="curl https://chat.openai.com/chat")
print(output)
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: curl -fsSL "https://api.github.com/repos/pytorch/pytorch/releases/latest" | jq -r '.tag_name' | sed 's/[^0-9\.\-]*//g'
AI:
```
$ curl -fsSL "https://api.github.com/repos/pytorch/pytorch/releases/latest" | jq -r '.tag_name' | sed 's/[^0-9\.\-]*//g'
1.8.1
```
Human: lynx https://www.deepmind.com/careers | ERROR: type should be string, got "https://langchain.readthedocs.io\\en\\latest\\modules\\memory\\examples\\chatgpt_clone.html" |