Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
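A model saved this way can be reloaded later. A minimal sketch, assuming the companion load_llm helper in langchain.llms.loading:
from langchain.llms.loading import load_llm
# Persist the LLM configuration, then restore it from the YAML file
llm.save(file_path="path/llm.yaml")
llm = load_llm("path/llm.yaml")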
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceHub[source]#
Wrapper around HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
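A slightly fuller sketch using only the fields documented below (repo_id, task, model_kwargs); the model id and parameter values are illustrative assumptions:
from langchain.llms import HuggingFaceHub
# task must be one of the supported pipelines; extra generation
# parameters are passed through model_kwargs
hf = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    task="text2text-generation",
    model_kwargs={"temperature": 0.5, "max_length": 64},
)
print(hf("Translate to German: How are you?"))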
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field repo_id: str = 'gpt2'#
Model name to use.
field task: Optional[str] = None#
Task to call the model with. Should be a task that returns generated_text.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFacePipeline[source]#
Wrapper around HuggingFace Pipeline API.
To use, you should have the transformers python package installed.
Only supports text-generation and text2text-generation for now.
Example using from_model_id:
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2", task="text-generation"
)
Example passing pipeline in directly:
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
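Once constructed, the pipeline wrapper behaves like any other LLM. A short usage sketch, assuming the LLMChain and PromptTemplate classes from the same langchain release:
from langchain import LLMChain, PromptTemplate
# Plug the local pipeline into a simple chain
prompt = PromptTemplate(
    template="Question: {question}\nAnswer:", input_variables=["question"]
)
chain = LLMChain(prompt=prompt, llm=hf)
print(chain.run("What is electroencephalography?"))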
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field model_id: str = 'gpt2'#
Model name to use.
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.llms.base.LLM[source]#
Construct the pipeline object from model_id and task.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.LlamaCpp[source]#
Wrapper around the llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: abetlen/llama-cpp-python
Example
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
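A fuller construction sketch using only the fields documented below; the model path and parameter values are placeholders:
from langchain.llms import LlamaCpp
# n_ctx sets the context window, max_tokens caps the completion length
llm = LlamaCpp(
    model_path="/path/to/llama/model.bin",
    n_ctx=2048,
    temperature=0.8,
    max_tokens=256,
)
print(llm("Q: Name the planets in the solar system. A:"))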
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field echo: Optional[bool] = False#
Whether to echo the prompt.
field f16_kv: bool = True#
Use half-precision for key/value cache.
field last_n_tokens_size: Optional[int] = 64#
The number of tokens to look back when applying the repeat_penalty.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field logprobs: Optional[int] = None#
The number of logprobs to return. If None, no logprobs are returned.
field lora_base: Optional[str] = None#
The path to the Llama LoRA base model.
field lora_path: Optional[str] = None#
The path to the Llama LoRA. If None, no LoRa is loaded.
field max_tokens: Optional[int] = 256#
The maximum number of tokens to generate.
field model_path: str [Required]#
The path to the Llama model file.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use.
If None, the number of threads is automatically determined.
field repeat_penalty: Optional[float] = 1.1#
The penalty to apply to repeated tokens.
field seed: int = -1#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []#
A list of strings to stop generation when encountered.
field streaming: bool = True#
Whether to stream the results, token by token.
field suffix: Optional[str] = None#
A suffix to append to the generated text. If None, no suffix is appended.
field temperature: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field use_mmap: Optional[bool] = True#
Whether to keep the model loaded in RAM
field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator[Dict, None, None][source]#
Yields results objects as they are generated in real time.
BETA: this is a beta feature while we figure out the right abstraction:
Once that happens, this interface could change.
It also calls the callback manager’s on_llm_new_token event with
similar parameters to the OpenAI LLM class method of the same name.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns: A generator representing the stream of tokens being generated.
Yields: A dictionary-like object containing a string token and metadata.
See llama-cpp-python docs and below for more.
Example:
from langchain.llms import LlamaCpp
llm = LlamaCpp(
model_path="/path/to/local/model.bin",
temperature = 0.5
)
for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'",
        stop=["'", "\n"]):
    result = chunk["choices"][0]
    print(result["text"], end='', flush=True)
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Modal[source]#
Wrapper around Modal large language models.
To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
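The example block is empty in the source docstring. A minimal sketch, assuming a deployed Modal web endpoint (the URL is a placeholder):
from langchain.llms import Modal
# endpoint_url is the deployed Modal web endpoint serving the model
modal_llm = Modal(
    endpoint_url="https://example--my-llm.modal.run",
    model_kwargs={"temperature": 0.5},
)
print(modal_llm("Tell me a joke."))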
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
field endpoint_url: str = ''#
model endpoint to use
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.NLPCloud[source]#
Wrapper around NLPCloud large language models.
To use, you should have the nlpcloud python package installed, and the
environment variable NLPCLOUD_API_KEY set with your API key.
Example
from langchain.llms import NLPCloud
nlpcloud = NLPCloud(model="gpt-neox-20b")
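A fuller construction sketch, assuming NLPCLOUD_API_KEY is set and using the documented model_name and generation fields:
from langchain.llms import NLPCloud
nlpcloud = NLPCloud(
    model_name="finetuned-gpt-neox-20b",
    temperature=0.7,
    max_length=256,
)
print(nlpcloud("Write a tagline for an ice cream shop."))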
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field bad_words: List[str] = []#
List of tokens not allowed to be generated.
field do_sample: bool = True#
Whether to use sampling (True) or greedy decoding.
field early_stopping: bool = False#
Whether to stop beam search at num_beams sentences.
field length_no_input: bool = True#
Whether min_length and max_length should include the length of the input.
field length_penalty: float = 1.0#
Exponential penalty to the length.
field max_length: int = 256#
The maximum number of tokens to generate in the completion.
field min_length: int = 1#
The minimum number of tokens to generate in the completion.
field model_name: str = 'finetuned-gpt-neox-20b'#
Model name to use.
field num_beams: int = 1#
Number of beams for beam search.
field num_return_sequences: int = 1#
How many completions to generate for each prompt.
field remove_end_sequence: bool = True#
Whether or not to remove the end sequence token.
field remove_input: bool = True#
Remove input text from API response
field repetition_penalty: float = 1.0#
Penalizes repeated tokens. 1.0 means no penalty.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 50#
The number of highest probability tokens to keep for top-k filtering.
field top_p: int = 1#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.OpenAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
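Beyond single-prompt calls, generate() accepts a batch of prompts and returns an LLMResult. A short sketch, assuming OPENAI_API_KEY is set:
from langchain.llms import OpenAI
llm = OpenAI(model_name="text-davinci-003", temperature=0.7)
# generate() batches prompts; each prompt gets a list of Generation objects
result = llm.generate(["Tell me a joke.", "Tell me a poem."])
print(result.generations[0][0].text)  # first completion for the first prompt
print(result.llm_output)  # provider metadata such as token usage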
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
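These helpers can be combined to budget completion length for a prompt. A small sketch reusing the openai instance from the example above:
prompt = "Tell me a joke."
# Total context window for the model, minus tokens used by the prompt
context_size = openai.modelname_to_contextsize("text-davinci-003")
prompt_tokens = openai.get_num_tokens(prompt)
remaining = context_size - prompt_tokens
# max_tokens_for_prompt performs the same budget computation for the
# instance's own model_name
print(remaining, openai.max_tokens_for_prompt(prompt))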
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.OpenAIChat[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
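A sketch showing the documented prefix_messages field; the OpenAI-style role/content dictionaries are an assumption based on the underlying chat API:
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[
        {"role": "system", "content": "You are a terse assistant."}
    ],
)
print(openaichat("What is the capital of France?"))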
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field prefix_messages: List [Optional]#
Series of messages for Chat input.
field streaming: bool = False#
Whether to stream the results or not.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int[source]#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Petals[source]#
Wrapper around Petals Bloom models.
To use, you should have the petals python package installed, and the
environment variable HUGGINGFACE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
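The example block is empty in the source docstring. A minimal sketch, assuming HUGGINGFACE_API_KEY is set and using the documented default model:
from langchain.llms import Petals
petals = Petals(model_name="bigscience/bloom-petals", max_new_tokens=128)
print(petals("A question answered thoughtfully:"))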
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field client: Any = None#
The client to use for the API calls.
field do_sample: bool = True#
Whether or not to use sampling; use greedy decoding otherwise.
field max_length: Optional[int] = None#
The maximum length of the sequence to be generated.
field max_new_tokens: int = 256#
The maximum number of new tokens to generate in the completion.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call
not explicitly specified.
field model_name: str = 'bigscience/bloom-petals'#
The model to use.
field temperature: float = 0.7#
What sampling temperature to use
field tokenizer: Any = None#
The tokenizer to use for the API calls.
field top_k: Optional[int] = None#
The number of highest probability vocabulary tokens
to keep for top-k-filtering.
field top_p: float = 0.9#
The cumulative probability for top-p sampling.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PredictionGuard[source]#
Wrapper around Prediction Guard large language models.
To use, you should have the predictionguard python package installed, and the
environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass
it as a named parameter to the constructor.
Example
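The example block is empty in the source docstring. A minimal sketch, assuming PREDICTIONGUARD_TOKEN is set and using only the fields documented below:
from langchain.llms import PredictionGuard
pgllm = PredictionGuard(name="default-text-gen", max_tokens=256, temperature=0.75)
print(pgllm("Tell me a joke."))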
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field max_tokens: int = 256#
Denotes the number of tokens to predict per generation.
field name: Optional[str] = 'default-text-gen'#
Proxy name to use.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PromptLayerOpenAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
PromptLayer API key, respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerOpenAI LLM adds two optional parameters:
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object.
Example
from langchain.llms import PromptLayerOpenAI
openai = PromptLayerOpenAI(model_name="text-davinci-003")
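A sketch of the two extra parameters described above; with return_pl_id set, the PromptLayer request ID is attached to each generation's generation_info:
from langchain.llms import PromptLayerOpenAI
openai = PromptLayerOpenAI(
    model_name="text-davinci-003",
    pl_tags=["langchain", "reference-docs"],
    return_pl_id=True,
)
result = openai.generate(["Tell me a joke."])
print(result.generations[0][0].generation_info)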
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PromptLayerOpenAIChat[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
PromptLayer API key, respectively.
All parameters that can be passed to the OpenAIChat LLM can also
be passed here. The PromptLayerOpenAIChat adds two optional parameters:
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object.
Example
from langchain.llms import PromptLayerOpenAIChat
openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field prefix_messages: List [Optional]#
Series of messages for Chat input.
field streaming: bool = False#
Whether to stream the results or not.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.RWKV[source]#
Wrapper around RWKV language models.
To use, you should have the rwkv python package installed, the
pre-trained model file, and the model’s config information.
Example
from langchain.llms import RWKV
model = RWKV(model="./models/rwkv-3b-fp16.bin", strategy="cpu fp32")
# Simplest invocation
response = model("Once upon a time, ")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field CHUNK_LEN: int = 256#
Batch size for prompt processing.
field max_tokens_per_generation: int = 256#
Maximum number of tokens to generate.
field model: str [Required]#
Path to the pre-trained RWKV model file.
field penalty_alpha_frequency: float = 0.4#
Positive values penalize new tokens based on their existing frequency
in the text so far, decreasing the model’s likelihood to repeat the same
line verbatim.
field penalty_alpha_presence: float = 0.4#
Positive values penalize new tokens based on whether they appear
in the text so far, increasing the model’s likelihood to talk about
new topics.
field rwkv_verbose: bool = True#
Print debug information.
field strategy: str = 'cpu fp32'#
The strategy string passed to the rwkv model describing device and precision, e.g. 'cpu fp32'.
field temperature: float = 1.0#
The temperature to use for sampling.
field tokens_path: str [Required]#
Path to the RWKV tokens file.
field top_p: float = 0.5#
The top-p value to use for sampling.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Replicate[source]#
Wrapper around Replicate models.
To use, you should have the replicate python package installed,
and the environment variable REPLICATE_API_TOKEN set with your API token.
You can find your token here: https://replicate.com/account
The model param is required, but any other model parameters can also
be passed in with the format input={model_param: value, …}
Example
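A minimal illustrative sketch (the model version string and input parameters below are placeholders, not real values):
from langchain.llms import Replicate
# Placeholder "owner/model:version" identifier; substitute a real Replicate model version.
replicate = Replicate(
    model="owner/model:0123456789abcdef",
    input={"temperature": 0.75, "max_length": 500},
)
text = replicate("Tell me a short joke about llamas.")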
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SagemakerEndpoint[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
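A minimal illustrative sketch (the endpoint name, region, and JSON payload shapes below are assumptions; the exact request and response formats depend on the deployed model container):
import json
from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Assumed request shape; adjust to whatever your model container expects.
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> str:
        # Assumed response shape; `output` is the raw response body returned by the endpoint.
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="my-endpoint-name",  # placeholder endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    content_handler=ContentHandler(),
    model_kwargs={"temperature": 0.7},
)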
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]#
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the `boto3`_ docs for more info.
.. _boto3: <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html>
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: str = ''#
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SelfHostedHuggingFaceLLM[source]#
Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Only supports text-generation and text2text-generation for now.
Example using from_model_id:
from langchain.llms import SelfHostedHuggingFaceLLM
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-large", task="text2text-generation",
hardware=gpu
)
Example passing a fn that generates a pipeline (because the pipeline is not serializable):
from langchain.llms import SelfHostedHuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
def get_pipeline():
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer
)
return pipe
hf = SelfHostedHuggingFaceLLM(
model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field device: int = 0#
Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _generate_text>#
Inference function to send to the remote hardware.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_id: str = 'gpt2'#
Hugging Face model_id to load the model.
field model_kwargs: Optional[dict] = None#
Key word arguments to pass to the model.
field model_load_fn: Callable = <function _load_transformer>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'transformers', 'torch']#
Requirements to install on hardware to inference the model.
field task: str = 'text-generation'#
Hugging Face task (either “text-generation” or “text2text-generation”).
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM#
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SelfHostedPipeline[source]#
Run model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example for custom pipeline and inference functions:
from langchain.llms import SelfHostedPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
def load_pipeline():
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
return pipeline(
"text-generation", model=model, tokenizer=tokenizer,
max_new_tokens=10
)
def inference_fn(pipeline, prompt, stop = None):
return pipeline(prompt)[0]["generated_text"]
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedPipeline(
model_load_fn=load_pipeline,
hardware=gpu,
model_reqs=["./", "transformers", "torch"], inference_fn=inference_fn
)
Example for <2GB model (can be serialized and sent directly to the server):
from langchain.llms import SelfHostedPipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
my_model = ...
llm = SelfHostedPipeline.from_pipeline(
pipeline=my_model,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:
from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline
generator = pipeline(model="gpt2")
rh.blob(pickle.dumps(generator), path="models/pipeline.pkl"
).save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _generate_text>#
Inference function to send to the remote hardware.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_load_fn: Callable [Required]#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'torch']#
Requirements to install on hardware to inference the model.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM[source]#
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.StochasticAI[source]#
Wrapper around StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY
set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_url="")
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field api_url: str = ''#
The API URL to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Writer[source]#
Wrapper around Writer large language models.
To use, you should have the environment variable WRITER_API_KEY
set with your API key.
Example
from langchain import Writer
writer = Writer(model_id="palmyra-base")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base URL to use; if None, it is decided based on the model name.
field beam_search_diversity_rate: float = 1.0#
Only applies to beam search, i.e. when the beam width is >1.
A higher value encourages beam search to return a more diverse
set of candidates
field beam_width: Optional[int] = None#
The number of concurrent candidates to keep track of during
beam search
field length: int = 256#
The maximum number of tokens to generate in the completion.
field length_pentaly: float = 1.0#
Only applies to beam search, i.e. when the beam width is >1.
Larger values penalize long candidates more heavily, thus preferring
shorter candidates
field logprobs: bool = False#
Whether to return log probabilities.
field model_id: str = 'palmyra-base'#
Model name to use.
field random_seed: int = 0#
The model generates random results.
Changing the random seed alone will produce a different response
with similar characteristics. It is possible to reproduce results
by fixing the random seed (assuming all other hyperparameters
are also fixed)
field repetition_penalty: float = 1.0#
Penalizes repeated tokens according to frequency.
field stop: Optional[List[str]] = None#
Sequences at which completion generation will stop.
field temperature: float = 1.0#
What sampling temperature to use.
field tokens_to_generate: int = 24#
Max number of tokens to generate.
field top_k: int = 1#
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
field top_p: float = 1.0#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Tools#
Core toolkit implementations.
pydantic model langchain.tools.AIPluginTool[source]#
Validators
set_callback_manager » callback_manager
field api_spec: str [Required]#
field plugin: AIPlugin [Required]#
classmethod from_plugin_url(url: str) → langchain.tools.plugin.AIPluginTool[source]#
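A short usage sketch (the manifest URL below is a placeholder for a real ai-plugin.json location):
from langchain.tools import AIPluginTool
# Load the plugin tool from its manifest; name and description are populated from the plugin.
tool = AIPluginTool.from_plugin_url("https://example.com/.well-known/ai-plugin.json")
print(tool.name, tool.description)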
pydantic model langchain.tools.APIOperation[source]#
A model for a single API operation.
field base_url: str [Required]#
The base URL of the operation.
field description: Optional[str] = None#
The description of the operation.
field method: langchain.tools.openapi.utils.openapi_utils.HTTPVerb [Required]#
The HTTP method of the operation.
field operation_id: str [Required]#
The unique identifier of the operation.
field path: str [Required]#
The path of the operation.
field properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]#
field request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = None#
The request body of the operation.
classmethod from_openapi_spec(spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, path: str, method: str) → langchain.tools.openapi.utils.api_models.APIOperation[source]#
Create an APIOperation from an OpenAPI spec.
classmethod from_openapi_url(spec_url: str, path: str, method: str) → langchain.tools.openapi.utils.api_models.APIOperation[source]#
Create an APIOperation from an OpenAPI URL.
to_typescript() → str[source]#
Get typescript string representation of the operation.
static ts_type_from_python(type_: Union[str, Type, tuple, None, enum.Enum]) → str[source]#
property body_params: List[str]#
property path_params: List[str]#
property query_params: List[str]#
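A short usage sketch (the spec URL, path, and method below are placeholders):
from langchain.tools import APIOperation
# Build an operation directly from a hosted OpenAPI document.
operation = APIOperation.from_openapi_url(
    "https://example.com/openapi.json", path="/search", method="get"
)
print(operation.operation_id)
print(operation.to_typescript())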
pydantic model langchain.tools.BaseTool[source]#
Interface LangChain tools must implement.
Validators
set_callback_manager » callback_manager
field args_schema: Optional[Type[pydantic.main.BaseModel]] = None#
Pydantic model class to validate and parse the tool’s input arguments.
field callback_manager: langchain.callbacks.base.BaseCallbackManager [Optional]#
field description: str [Required]#
field name: str [Required]#
field return_direct: bool = False#
field verbose: bool = False#
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', **kwargs: Any) → str[source]#
Run the tool asynchronously.
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', **kwargs: Any) → str[source]#
Run the tool.
property args: dict#
pydantic model langchain.tools.DuckDuckGoSearchTool[source]#
Tool that adds the capability to query the DuckDuckGo search API.
Validators
set_callback_manager » callback_manager
field api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]#
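A short usage sketch using the run() method inherited from BaseTool:
from langchain.tools import DuckDuckGoSearchTool
# The API wrapper is built automatically when not supplied.
search = DuckDuckGoSearchTool()
result = search.run("What is LangChain?")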
pydantic model langchain.tools.GooglePlacesTool[source]#
Tool that adds the capability to query the Google places API.
Validators
set_callback_manager » callback_manager
field api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]#
pydantic model langchain.tools.IFTTTWebhook[source]#
IFTTT Webhook.
Parameters
name – name of the tool
description – description of the tool
url – url to hit with the json event.
Validators
set_callback_manager » callback_manager
field url: str [Required]#
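A short usage sketch (the applet event name and webhook key in the URL are placeholders for your own IFTTT configuration):
from langchain.tools import IFTTTWebhook
tool = IFTTTWebhook(
    name="Spotify",
    description="Add a song to a Spotify playlist.",
    url="https://maker.ifttt.com/trigger/spotify/json/with/key/<YOUR-WEBHOOK-KEY>",
)
tool.run("taylor swift")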
pydantic model langchain.tools.OpenAPISpec[source]#
OpenAPI Model that removes misformatted parts of the spec.
field components: Optional[openapi_schema_pydantic.v3.v3_1_0.components.Components] = None#
An element to hold various schemas for the document.
field externalDocs: Optional[openapi_schema_pydantic.v3.v3_1_0.external_documentation.ExternalDocumentation] = None#
Additional external documentation.
field info: openapi_schema_pydantic.v3.v3_1_0.info.Info [Required]#
REQUIRED. Provides metadata about the API. The metadata MAY be used by tooling as required.
field jsonSchemaDialect: Optional[str] = None#
The default value for the $schema keyword within [Schema Objects](#schemaObject)
contained within this OAS document. This MUST be in the form of a URI.
field openapi: str = '3.1.0'#
REQUIRED. This string MUST be the [version number](#versions)
of the OpenAPI Specification that the OpenAPI document uses.
The openapi field SHOULD be used by tooling to interpret the OpenAPI document.
This is not related to the API [info.version](#infoVersion) string.
field paths: Optional[Dict[str, openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem]] = None#
The available paths and operations for the API.
field security: Optional[List[Dict[str, List[str]]]] = None#
A declaration of which security mechanisms can be used across the API.
The list of values includes alternative security requirement objects that can be used.
Only one of the security requirement objects need to be satisfied to authorize a request.
Individual operations can override this definition.
To make security optional, an empty security requirement ({}) can be included in the array.
field servers: List[openapi_schema_pydantic.v3.v3_1_0.server.Server] = [Server(url='/', description=None, variables=None)]#
An array of Server Objects, which provide connectivity information to a target server.
If the servers property is not provided, or is an empty array,
the default value would be a [Server Object](#serverObject) with a [url](#serverUrl) value of /.
field tags: Optional[List[openapi_schema_pydantic.v3.v3_1_0.tag.Tag]] = None#
A list of tags used by the document with additional metadata.
The order of the tags can be used to reflect on their order by the parsing tools.
Not all tags that are used by the [Operation Object](#operationObject) must be declared.
The tags that are not declared MAY be organized randomly or based on the tools’ logic.
Each tag name in the list MUST be unique.
field webhooks: Optional[Dict[str, Union[openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem, openapi_schema_pydantic.v3.v3_1_0.reference.Reference]]] = None#
The incoming webhooks that MAY be received as part of this API and that the API consumer MAY choose to implement.
Closely related to the callbacks feature, this section describes requests initiated other than by an API call,
for example by an out of band registration.
The key name is a unique string to refer to each webhook,
while the (optionally referenced) Path Item Object describes a request
that may be initiated by the API provider and the expected responses.
An [example](../examples/v3.1/webhook-example.yaml) is available.
classmethod from_file(path: Union[str, pathlib.Path]) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a file path.
classmethod from_spec_dict(spec_dict: dict) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a dict.
classmethod from_text(text: str) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a text.
classmethod from_url(url: str) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a URL.
static get_cleaned_operation_id(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation, path: str, method: str) → str[source]#
Get a cleaned operation id from an operation id.
get_methods_for_path(path: str) → List[str][source]#
Return a list of valid methods for the specified path.
get_operation(path: str, method: str) → openapi_schema_pydantic.v3.v3_1_0.operation.Operation[source]#
Get the operation object for a given path and HTTP method.
get_parameters_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) → List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter][source]#
Get the components for a given operation.
get_referenced_schema(ref: openapi_schema_pydantic.v3.v3_1_0.reference.Reference) → openapi_schema_pydantic.v3.v3_1_0.schema.Schema[source]#
Get a schema (or nested reference) or err.
get_request_body_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) → Optional[openapi_schema_pydantic.v3.v3_1_0.request_body.RequestBody][source]#
Get the request body for a given operation.
classmethod parse_obj(obj: dict) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
property base_url: str#
Get the base url.
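A short usage sketch tying OpenAPISpec to APIOperation (the spec URL and path below are placeholders):
from langchain.tools import APIOperation, OpenAPISpec
# Load a remote spec, inspect the available methods for a path, and build an operation.
spec = OpenAPISpec.from_url("https://example.com/openapi.json")
methods = spec.get_methods_for_path("/search")
operation = APIOperation.from_openapi_spec(spec, "/search", methods[0])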
Chains#
Chains are easily reusable components which can be linked together.
pydantic model langchain.chains.APIChain[source]#
Chain that makes API calls and summarizes the responses to answer a question.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_api_answer_prompt » all fields
validate_api_request_prompt » all fields
field api_answer_chain: LLMChain [Required]#
field api_docs: str [Required]#
field api_request_chain: LLMChain [Required]#
field requests_wrapper: TextRequestsWrapper [Required]#
classmethod from_llm_and_api_docs(llm: langchain.schema.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.api.base.APIChain[source]#
Load chain from just an LLM and the api docs.
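A minimal illustrative sketch (the documentation string below is a placeholder; real API documentation text would normally be passed in):
from langchain.llms import OpenAI
from langchain.chains import APIChain

api_docs = "BASE URL: https://api.example.com/ ..."  # placeholder API documentation text
chain = APIChain.from_llm_and_api_docs(OpenAI(), api_docs, verbose=True)
chain.run("What is the current temperature in Munich?")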
pydantic model langchain.chains.AnalyzeDocumentChain[source]#
Chain that splits a document, then analyzes it in pieces.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]#
field text_splitter: langchain.text_splitter.TextSplitter [Optional]#
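A minimal illustrative sketch that pairs the chain with a summarization combine-documents chain (assumes langchain.chains.summarize.load_summarize_chain is available):
from langchain.llms import OpenAI
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain

# The combine_docs_chain decides how the split pieces are analyzed; here it summarizes them.
summarize_chain = load_summarize_chain(OpenAI(), chain_type="map_reduce")
chain = AnalyzeDocumentChain(combine_docs_chain=summarize_chain)
chain.run("<one long document passed in as a single string>")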
pydantic model langchain.chains.ChatVectorDBChain[source]#
Chain for chatting with a vector database.
Validators
raise_deprecation » all fields
set_callback_manager » callback_manager
set_verbose » verbose
field search_kwargs: dict [Optional]#
field top_k_docs_for_context: int = 4#
field vectorstore: VectorStore [Required]#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', combine_docs_chain_kwargs: Optional[Dict] = None, **kwargs: Any) → langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]#
Load chain from LLM.
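A minimal illustrative sketch (vectorstore is assumed to be an already-populated VectorStore instance):
from langchain.llms import OpenAI
from langchain.chains import ChatVectorDBChain

# Build the chain from an LLM and an existing vector store, then ask a question.
chain = ChatVectorDBChain.from_llm(OpenAI(), vectorstore)
result = chain({"question": "What does the document say about X?", "chat_history": []})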
pydantic model langchain.chains.ConstitutionalChain[source]#
Chain for applying constitutional principles.
Example
from langchain.llms import OpenAI
from langchain.chains import LLMChain, ConstitutionalChain
qa_prompt = PromptTemplate(
template="Q: {question} A:",
input_variables=["question"],
)
qa_chain = LLMChain(llm=OpenAI(), prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
chain=qa_chain,
constitutional_principles=[
ConstitutionalPrinciple(
critique_request="Tell if this answer is good.",
revision_request="Give a better answer.",
)
],
)
constitutional_chain.run(question="What is the meaning of life?")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field chain: langchain.chains.llm.LLMChain [Required]#
field constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]#
field critique_chain: langchain.chains.llm.LLMChain [Required]#
field revision_chain: langchain.chains.llm.LLMChain [Required]#
7afad359b4bb-6 | classmethod from_llm(llm: langchain.schema.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-7 | serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-8 | better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-9 | by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-10 | 'critique_request', 'critique', 'revision_request', 'revision'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision request: {revision_request}\n\nRevision: {revision}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix='Below is conversation between a human and an AI model.', template_format='f-string', validate_template=True), revision_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-11 | 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-12 | 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-13 | It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-14 | information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request', 'revision'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision request: {revision_request}\n\nRevision: {revision}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-15 | suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.constitutional_ai.base.ConstitutionalChain[source]# | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-16 | Create a chain from an LLM.
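In practice you rarely override the critique and revision prompts shown above; you supply a base chain, an LLM, and a list of principles, and the defaults drive the critique/revision loop. A minimal sketch under that assumption (the question prompt and the principle wording below are illustrative, not library defaults):
from langchain.llms import OpenAI
from langchain.chains import LLMChain, ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0)
# Base chain whose raw output will be critiqued and revised.
qa_prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}\nAnswer:",
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
# A hand-written principle; the wording here is an example, not a built-in.
ethical_principle = ConstitutionalPrinciple(
    name="Ethical Principle",
    critique_request="The model should only talk about ethical and legal things.",
    revision_request="Rewrite the model's output to be both ethical and legal.",
)
constitutional_chain = ConstitutionalChain.from_llm(
    chain=qa_chain,
    constitutional_principles=[ethical_principle],
    llm=llm,
    verbose=True,
)
constitutional_chain.run(question="How can I get out of paying a parking ticket?")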
classmethod get_principles(names: Optional[List[str]] = None) → List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]#
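This looks up built-in ConstitutionalPrinciple objects by key; called with no names it is expected to return every registered principle. A short sketch (the key "illegal" is assumed to be one of the registered entries):
from langchain.chains import ConstitutionalChain
# Fetch specific built-in principles by key.
principles = ConstitutionalChain.get_principles(names=["illegal"])
# With no argument, all registered principles should be returned.
all_principles = ConstitutionalChain.get_principles()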
property input_keys: List[str]#
Defines the input keys.
property output_keys: List[str]#
Defines the output keys.
pydantic model langchain.chains.ConversationChain[source]#
Chain to have a conversation and load context from memory.
Example
from langchain import ConversationChain, OpenAI
conversation = ConversationChain(llm=OpenAI())
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_prompt_input_variables » all fields
field memory: langchain.schema.BaseMemory [Optional]#
Default memory store.
field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True)#
Default conversation prompt to use.
property input_keys: List[str]# | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
7afad359b4bb-17 | Default conversation prompt to use.
property input_keys: List[str]#
Use this since some prompt variables come from history.
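The default prompt above feeds the accumulated {history} into every call, so consecutive calls share context. A short usage sketch with an explicit buffer memory (assumes an OpenAI API key is configured):
from langchain import ConversationChain, OpenAI
from langchain.memory import ConversationBufferMemory
conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
    verbose=True,
)
conversation.predict(input="Hi, my name is Ada.")
# The first turn is now part of {history}, so the model can answer this one.
conversation.predict(input="What is my name?")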
pydantic model langchain.chains.ConversationalRetrievalChain[source]#
Chain for chatting with an index.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field max_tokens_limit: Optional[int] = None#
If set, restricts the documents returned from the store based on token count; enforced only for StuffDocumentsChain.
field retriever: BaseRetriever [Required]#
Index to connect to. | /content/https://python.langchain.com/en/latest/reference/modules/chains.html |
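A minimal sketch of wiring the chain to an existing index (the vectorstore variable and the documents behind it are assumed to exist already, e.g. a FAISS or Chroma store):
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import OpenAI
# `vectorstore` is an existing vector store built elsewhere (assumption).
qa = ConversationalRetrievalChain.from_llm(
    llm=OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
)
chat_history = []
question = "What does the document say about memory?"
result = qa({"question": question, "chat_history": chat_history})
# Track the exchange so the next question can be condensed against it.
chat_history.append((question, result["answer"]))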