SerpAPI#
For backwards compatibility.
pydantic model langchain.serpapi.SerpAPIWrapper[source]#
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#
field serpapi_api_key: Optional[str] = None#
async arun(query: str) → str[source]#
Use aiohttp to run query through SerpAPI and parse result.
get_params(query: str) → Dict[str, str][source]#
Get parameters for SerpAPI.
results(query: str) → dict[source]#
Run query through SerpAPI and return the raw result.
run(query: str) → str[source]#
Run query through SerpAPI and parse result.
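A minimal usage sketch (assumes SERPAPI_API_KEY is set in the environment; the query is illustrative):
from langchain import SerpAPIWrapper
search = SerpAPIWrapper()
answer = search.run("What is the capital of France?")  # parsed answer string
raw = search.results("What is the capital of France?")  # raw JSON response as a dict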
Docstore#
Wrappers on top of docstores.
class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]#
Simple in-memory docstore in the form of a dict.
add(texts: Dict[str, langchain.schema.Document]) → None[source]#
Add texts to in memory dictionary.
search(search: str) → Union[str, langchain.schema.Document][source]#
Search via direct lookup.
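A minimal sketch of the in-memory docstore (the IDs and contents are illustrative):
from langchain.docstore import InMemoryDocstore
from langchain.schema import Document
store = InMemoryDocstore({"1": Document(page_content="hello world")})
store.add({"2": Document(page_content="foo bar")})
doc = store.search("2")  # returns the Document; a missing ID returns an explanatory string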
class langchain.docstore.Wikipedia[source]#
Wrapper around the Wikipedia API.
search(search: str) → Union[str, langchain.schema.Document][source]#
Try to search for a wiki page.
If the page exists, return the page summary and a PageWithLookups object.
If the page does not exist, return similar entries.
Document Compressors#
pydantic model langchain.retrievers.document_compressors.DocumentCompressorPipeline[source]#
Document compressor that uses a pipeline of transformers.
field transformers: List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required]#
List of document filters that are chained together and run in sequence.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Transform a list of documents.
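A sketch of a two-stage pipeline that splits documents and then drops chunks irrelevant to the query (assumes OpenAI credentials are configured; the chunk size and threshold are illustrative):
from langchain.schema import Document
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import DocumentCompressorPipeline, EmbeddingsFilter
splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")
relevance_filter = EmbeddingsFilter(embeddings=OpenAIEmbeddings(), similarity_threshold=0.76)
pipeline = DocumentCompressorPipeline(transformers=[splitter, relevance_filter])
docs = [Document(page_content="The president spoke about the economy. The weather was sunny.")]
compressed = pipeline.compress_documents(docs, query="What did the president say about the economy?")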
pydantic model langchain.retrievers.document_compressors.EmbeddingsFilter[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
Embeddings to use for embedding document contents and queries.
field k: Optional[int] = 20#
The number of relevant documents to return. Can be set to None, in which case
similarity_threshold must be specified. Defaults to 20.
field similarity_fn: Callable = <function cosine_similarity>#
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
field similarity_threshold: Optional[float] = None#
Threshold for determining when two documents are similar enough
to be considered redundant. Defaults to None; must be specified if k is set
to None.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter down documents.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter documents based on similarity of their embeddings to the query.
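A standalone sketch using a similarity threshold instead of top-k (assumes OpenAI credentials; the threshold is illustrative, and docs is a sequence of Documents):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter
embeddings_filter = EmbeddingsFilter(embeddings=OpenAIEmbeddings(), k=None, similarity_threshold=0.76)
kept = embeddings_filter.compress_documents(docs, query="the economy")  # keeps only chunks above the threshold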
pydantic model langchain.retrievers.document_compressors.LLMChainExtractor[source]#
field get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
LLM wrapper to use for compressing documents.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress page content of raw documents.
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, prompt: Optional[langchain.prompts.prompt.PromptTemplate] = None, get_input: Optional[Callable[[str, langchain.schema.Document], str]] = None) → langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor[source]#
Initialize from LLM.
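A sketch of building an extractor and compressing documents down to query-relevant passages (assumes an OpenAI API key):
from langchain.llms import OpenAI
from langchain.schema import Document
from langchain.retrievers.document_compressors import LLMChainExtractor
compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
docs = [Document(page_content="The sky is blue. The president discussed the economy.")]
compressed = compressor.compress_documents(docs, query="What did the president discuss?")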
pydantic model langchain.retrievers.document_compressors.LLMChainFilter[source]#
Filter that drops documents that aren’t relevant to the query.
field get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
LLM wrapper to use for filtering documents.
The chain prompt is expected to have a BooleanOutputParser.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter down documents.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter down documents based on their relevance to the query.
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) → langchain.retrievers.document_compressors.chain_filter.LLMChainFilter[source]#
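A sketch (assumes an OpenAI API key; docs is a sequence of Documents as in the extractor example above):
from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainFilter
doc_filter = LLMChainFilter.from_llm(OpenAI(temperature=0))
kept = doc_filter.compress_documents(docs, query="the economy")  # keeps only documents the LLM judges relevant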
Output Parsers#
pydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]#
Parse out comma-separated lists.
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → List[str][source]#
Parse the output of an LLM call.
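A minimal sketch:
from langchain.output_parsers import CommaSeparatedListOutputParser
parser = CommaSeparatedListOutputParser()
instructions = parser.get_format_instructions()  # append to your prompt
parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']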
pydantic model langchain.output_parsers.GuardrailsOutputParser[source]#
field guard: Any = None#
classmethod from_rail(rail_file: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
classmethod from_rail_string(rail_str: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Dict[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
pydantic model langchain.output_parsers.ListOutputParser[source]#
Class to parse the output of an LLM call to a list.
abstract parse(text: str) → List[str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.OutputFixingParser[source]#
Wraps a parser and tries to fix parsing errors.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True)) → langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.fix.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
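A sketch of wrapping a fragile inner parser (assumes an OpenAI API key; the inner parser and malformed input are illustrative):
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, CommaSeparatedListOutputParser
inner = CommaSeparatedListOutputParser()
fixing_parser = OutputFixingParser.from_llm(llm=ChatOpenAI(temperature=0), parser=inner)
result = fixing_parser.parse("red; green; blue")  # if the inner parser raises, the completion is sent back to the LLM for repair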
pydantic model langchain.output_parsers.PydanticOutputParser[source]#
field pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → langchain.output_parsers.pydantic.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
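A sketch (the Joke schema is an illustrative example):
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser
class Joke(BaseModel):
    setup: str = Field(description="question that sets up the joke")
    punchline: str = Field(description="answer that resolves the joke")
parser = PydanticOutputParser(pydantic_object=Joke)
instructions = parser.get_format_instructions()  # JSON-schema instructions for the prompt
joke = parser.parse('{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}')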
pydantic model langchain.output_parsers.RegexDictParser[source]#
Class to parse the output into a dictionary.
field no_update_value: Optional[str] = None#
field output_key_to_format: Dict[str, str] [Required]#
field regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.RegexParser[source]#
Class to parse the output into a dictionary.
field default_output_key: Optional[str] = None#
field output_keys: List[str] [Required]#
field regex: str [Required]#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
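A sketch (the regex and output keys are illustrative):
from langchain.output_parsers import RegexParser
parser = RegexParser(regex=r"Grade: (\d+)\nFeedback: (.*)", output_keys=["grade", "feedback"])
parser.parse("Grade: 8\nFeedback: Good answer.")  # -> {'grade': '8', 'feedback': 'Good answer.'}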
pydantic model langchain.output_parsers.ResponseSchema[source]#
field description: str [Required]#
field name: str [Required]#
pydantic model langchain.output_parsers.RetryOutputParser[source]#
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt and the completion to another
LLM, and telling it the completion did not satisfy criteria in the prompt.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True)) → langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.RetryWithErrorOutputParser[source]#
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt, the completion, AND the error
that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True)) → langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.StructuredOutputParser[source]#
field response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]#
classmethod from_response_schemas(response_schemas: List[langchain.output_parsers.structured.ResponseSchema]) → langchain.output_parsers.structured.StructuredOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Any[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
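A sketch combining ResponseSchema and StructuredOutputParser (the field names are illustrative):
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
instructions = parser.get_format_instructions()  # asks the LLM for a fenced json block
parser.parse('```json\n{"answer": "Paris", "source": "geography"}\n```')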
Text Splitter#
Functionality for splitting text.
class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
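A minimal sketch (the separator and sizes are illustrative; long_text is any str):
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(long_text)
docs = splitter.create_documents([long_text], metadatas=[{"source": "example.txt"}])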
class langchain.text_splitter.LatexTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along LaTeX-formatted layout elements.
class langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Markdown-formatted headings.
class langchain.text_splitter.NLTKTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using NLTK.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Python syntax.
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.SpacyTextSplitter(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using spaCy.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: typing.Callable[[str], int] = <built-in function len>)[source]#
Interface for splitting text into chunks.
async atransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document][source]#
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[langchain.schema.Document][source]#
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses tiktoken encoder to count length.
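A sketch of measuring chunk length in tokens rather than characters (assumes the tiktoken package is installed; long_text is any str):
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter.from_tiktoken_encoder(encoding_name="gpt2", chunk_size=100, chunk_overlap=0)
chunks = splitter.split_text(long_text)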
split_documents(documents: List[langchain.schema.Document]) → List[langchain.schema.Document][source]#
Split documents.
abstract split_text(text: str) → List[str][source]#
Split text into multiple components.
transform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document][source]#
Transform sequence of documents by splitting them.
class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]#
Implementation of splitting text that looks at tokens.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
Embeddings#
Wrappers around embedding modules.
pydantic model langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding[source]#
Wrapper for Aleph Alpha’s Asymmetric Embeddings
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is the content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
field compress_to_size: Optional[int] = 128#
Whether to return the original 5120-dimensional embedding vector,
or compress it to 128 dimensions.
field contextual_control_threshold: Optional[int] = None#
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
field control_log_additive: Optional[bool] = True#
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
field hosting: Optional[str] = 'https://api.aleph-alpha.com'#
Optional parameter that specifies which datacenters may process the request.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field normalize: Optional[bool] = True#
Whether returned embeddings should be normalized.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Aleph Alpha’s asymmetric Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Aleph Alpha's asymmetric, query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding[source]#
The symmetric version of Aleph Alpha's semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with a SemanticRepresentation.Symmetric
Example
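A sketch mirroring the asymmetric example above (assumes an Aleph Alpha API key is configured):
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)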
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Aleph Alpha’s Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Aleph Alpha's symmetric, query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.CohereEmbeddings[source]#
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(model="medium", cohere_api_key="my-api-key")
field model: str = 'large'#
Model name to use.
field truncate: Optional[str] = None#
Truncate embeddings that are too long from start or end ("NONE"|"START"|"END")
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Cohere’s embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Cohere’s embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.FakeEmbeddings[source]#
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed search docs.
embed_query(text: str) → List[float][source]#
Embed query text.
pydantic model langchain.embeddings.HuggingFaceEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
field cache_folder: Optional[str] = None#
Path to store models.
Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field model_kwargs: Dict[str, Any] [Optional]#
Keyword arguments to pass to the model.
field model_name: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceHubEmbeddings[source]#
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field repo_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field task: Optional[str] = 'feature-extraction'#
Task to call the model with.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceInstructEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python package installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name, model_kwargs=model_kwargs
)
field cache_folder: Optional[str] = None#
Path to store models.
Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_kwargs: Dict[str, Any] [Optional]#
Keyword arguments to pass to the model.
field model_name: str = 'hkunlp/instructor-large'#
Model name to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.LlamaCppEmbeddings[source]#
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use. If None, the number
of threads is automatically determined.
field seed: int = -1#
Seed. If -1, a random seed is used.
field use_mlock: bool = False#
Force system to keep model in RAM.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed a list of documents using the Llama model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using the Llama model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.OpenAIEmbeddings[source]#
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and, optionally,
OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to ‘azure’ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name"
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
field chunk_size: int = 1000#
Maximum number of texts to embed in each batch
field max_retries: int = 6#
Maximum number of retries to make when generating.
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]#
Call out to OpenAI’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to OpenAI’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
pydantic model langchain.embeddings.SagemakerEndpointEmbeddings[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]#
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Keyword arguments to pass to the model.
field region_name: str = ''#
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]#
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size defines how many input texts will
be grouped together as a request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
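A sketch of a content handler and wrapper construction (assumes an endpoint that accepts and returns JSON; the payload keys, endpoint name, and region are illustrative):
import json
from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"
    def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:
        # serialize the batch of texts into the endpoint's expected JSON payload
        return json.dumps({"text_inputs": prompts, **model_kwargs}).encode("utf-8")
    def transform_output(self, output: bytes) -> List[List[float]]:
        # read the streamed response body and pull out the embedding matrix
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["embedding"]
embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",  # illustrative endpoint name
    region_name="us-west-2",
    content_handler=ContentHandler(),
)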
pydantic model langchain.embeddings.SelfHostedEmbeddings[source]#
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import pickle
import runhouse as rh
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings on the remote hardware.
field inference_kwargs: Any = None#
Any kwargs to pass to the model’s inference function.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedHuggingFaceEmbeddings[source]#
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model load function.
field model_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field model_load_fn: Callable = <function load_embedding_model>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']#
Requirements to install on the hardware to run inference on the model.
pydantic model langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings[source]#
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_id: str = 'hkunlp/instructor-large'#
Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#
Requirements to install on the hardware to run inference on the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
langchain.embeddings.SentenceTransformerEmbeddings#
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
pydantic model langchain.embeddings.TensorflowHubEmbeddings[source]#
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
field model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'#
Model URL to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
Chat Models#
pydantic model langchain.chat_models.AzureChatOpenAI[source]#
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the “Model deployment name” in the Azure portal.
In addition, you should have the openai python package installed, and the
following environment variables set or passed in constructor in lower case:
- OPENAI_API_TYPE (default: azure)
- OPENAI_API_KEY
- OPENAI_API_BASE
- OPENAI_API_VERSION
For example, if you have gpt-35-turbo deployed, with the deployment name
35-turbo-dev, the constructor should look like:
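A sketch of such a constructor call (the API version string is illustrative and may differ for your resource):
from langchain.chat_models import AzureChatOpenAI
model = AzureChatOpenAI(
    deployment_name="35-turbo-dev",
    openai_api_version="2023-03-15-preview",  # illustrative version string
)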
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Validators
build_extra » all fields
set_callback_manager » callback_manager
validate_environment » all fields
field deployment_name: str = ''#
field openai_api_base: str = ''#
field openai_api_key: str = ''#
field openai_api_type: str = 'azure'#
field openai_api_version: str = ''#
field openai_organization: str = ''#
pydantic model langchain.chat_models.ChatAnthropic[source]#
Wrapper around Anthropic's large language model.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
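A minimal instantiation sketch (assumes the anthropic package is installed; the key is illustrative):
from langchain.chat_models import ChatAnthropic
chat = ChatAnthropic(anthropic_api_key="my-api-key")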
Validators
set_callback_manager » callback_manager
validate_environment » all fields
field callback_manager: langchain.callbacks.base.BaseCallbackManager [Optional]#
field verbose: bool [Optional]#
Whether to print out response text.
pydantic model langchain.chat_models.ChatOpenAI[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
Validators
build_extra » all fields
set_callback_manager » callback_manager
validate_environment » all fields
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: Optional[int] = None#
Maximum number of tokens to generate.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt.
field openai_api_key: Optional[str] = None#
field openai_organization: Optional[str] = None#
field request_timeout: int = 60#
Timeout in seconds for the OpenAI request.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
completion_with_retry(**kwargs: Any) → Any[source]#
Use tenacity to retry the completion call.
get_num_tokens(text: str) → int[source]#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int[source]#
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
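A sketch of a simple chat call (assumes OPENAI_API_KEY is set in the environment):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
response = chat([HumanMessage(content="Translate 'hello' to French.")])  # returns an AIMessage
chat.get_num_tokens("hello world")  # token count via tiktoken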
pydantic model langchain.chat_models.PromptLayerChatOpenAI[source]#
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python
packages installed, and the environment variables OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
PromptLayer API key, respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI adds two optional parameters:
Parameters
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object.
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
Validators
build_extra » all fields
set_callback_manager » callback_manager
validate_environment » all fields
field pl_tags: Optional[List[str]] = None#
field return_pl_id: Optional[bool] = False#
Agent Toolkits#
Agent toolkits.
pydantic model langchain.agents.agent_toolkits.JiraToolkit[source]#
Jira Toolkit.
field tools: List[langchain.tools.base.BaseTool] = []#
classmethod from_jira_api_wrapper(jira_api_wrapper: langchain.utilities.jira.JiraAPIWrapper) → langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit[source]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.JsonToolkit[source]#
Toolkit for interacting with a JSON spec.
field spec: langchain.tools.json.tool.JsonSpec [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.NLAToolkit[source]#
Natural Language API Toolkit Definition.
field nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]#
List of API Endpoint Tools.
classmethod from_llm_and_ai_plugin(llm: langchain.llms.base.BaseLLM, ai_plugin: langchain.tools.plugin.AIPlugin, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#
Instantiate the toolkit from an AIPlugin.
classmethod from_llm_and_ai_plugin_url(llm: langchain.llms.base.BaseLLM, ai_plugin_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#
Instantiate the toolkit from an AIPlugin URL.
classmethod from_llm_and_spec(llm: langchain.llms.base.BaseLLM, spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#
Instantiate the toolkit by creating tools for each operation.
classmethod from_llm_and_url(llm: langchain.llms.base.BaseLLM, open_api_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#
Instantiate the toolkit from an OpenAPI Spec URL
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools for all the API operations.
pydantic model langchain.agents.agent_toolkits.OpenAPIToolkit[source]#
Toolkit for interacting with an OpenAPI API.
field json_agent: langchain.agents.agent.AgentExecutor [Required]#
field requests_wrapper: langchain.requests.TextRequestsWrapper [Required]#
classmethod from_llm(llm: langchain.llms.base.BaseLLM, json_spec: langchain.tools.json.tool.JsonSpec, requests_wrapper: langchain.requests.TextRequestsWrapper, **kwargs: Any) → langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit[source]#
Create a JSON agent from the LLM, then initialize the toolkit.
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.PowerBIToolkit[source]#
Toolkit for interacting with a Power BI dataset.
field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#
field examples: Optional[str] = None#
field llm: langchain.schema.BaseLanguageModel [Required]#
field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.SQLDatabaseToolkit[source]#
Toolkit for interacting with SQL databases.
field db: langchain.sql_database.SQLDatabase [Required]#
field llm: langchain.llms.base.BaseLLM [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
property dialect: str#
Return string representation of dialect to use.
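A sketch of building the toolkit (assumes a local SQLite database and an OpenAI API key; the URI is illustrative):
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
db = SQLDatabase.from_uri("sqlite:///my_database.db")
toolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))
tools = toolkit.get_tools()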
pydantic model langchain.agents.agent_toolkits.VectorStoreInfo[source]#
Information about a vectorstore.
field description: str [Required]#
field name: str [Required]#
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
pydantic model langchain.agents.agent_toolkits.VectorStoreRouterToolkit[source]#
Toolkit for routing between vectorstores.
field llm: langchain.llms.base.BaseLLM [Optional]#
field vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.VectorStoreToolkit[source]#
Toolkit for interacting with a vector store.
field llm: langchain.llms.base.BaseLLM [Optional]#
field vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.ZapierToolkit[source]#
Zapier Toolkit.
field tools: List[langchain.tools.base.BaseTool] = []#
classmethod from_zapier_nla_wrapper(zapier_nla_wrapper: langchain.utilities.zapier.ZapierNLAWrapper) → langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit[source]#
Create a toolkit from a ZapierNLAWrapper.
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
langchain.agents.agent_toolkits.create_csv_agent(llm: langchain.llms.base.BaseLLM, path: str, pandas_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Create a CSV agent by loading the file into a dataframe and using the pandas agent.
langchain.agents.agent_toolkits.create_json_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Construct a json agent from an LLM and tools.
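Example (a hedged sketch based on the JSON agent notebook; the YAML file name is illustrative, and the JsonSpec/JsonToolkit import paths are assumptions):
import yaml
from langchain.agents.agent_toolkits import create_json_agent, JsonToolkit
from langchain.llms import OpenAI
from langchain.tools.json.tool import JsonSpec
with open("openai_openapi.yml") as f:
    data = yaml.safe_load(f)  # any nested dict works as the JSON blob
toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
agent = create_json_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
agent.run("What are the required parameters in the request body to the /completions endpoint?")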
langchain.agents.agent_toolkits.create_openapi_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Construct an OpenAPI agent from an LLM and tools.
langchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a pandas dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix: str = '\nThis is the result of `print(df.head())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Construct a pandas agent from an LLM and dataframe.
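Example (a minimal sketch; the CSV file is illustrative):
import pandas as pd
from langchain.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain.llms import OpenAI
df = pd.read_csv("titanic.csv")  # any dataframe works here
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows are there?")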
langchain.agents.agent_toolkits.create_pbi_agent(llm: langchain.llms.base.BaseLLM, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a Power BI Dataset.\nGiven an input question, create a syntactically correct DAX query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\n\nYou have access to tools for interacting with the Power BI Dataset. Only use the below tools. Only use the information returned by the below tools to construct your final answer. Usually I should first ask which tables I have, then how each table is defined and then ask the question to query tool to create a query for me and then I should ask the query tool to execute it, finally create a nice sentence that answers the question. If you receive an error back that mentions that the query was wrong try to phrase the question differently and get a new query from the question to query tool.\n\nIf the question does not seem related to the dataset, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should first ask which tables I have, then how each table is defined and then ask the question to query tool to create a query for me and then I should ask the query tool to execute it, finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, agent_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a Power BI agent from an LLM and tools.
langchain.agents.agent_toolkits.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'Assistant is a large language model trained by OpenAI built to help users interact with a PowerBI Dataset.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. \n\nGiven an input question, create a syntactically correct DAX query to run, then look at the results of the query and return the answer. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nUsually I should first ask which tables I have, then how each table is defined and then ask the question to query tool to create a query for me and then I should ask the query tool to execute it, finally create a complete sentence that answers the question. If you receive an error back that mentions that the query was wrong try to phrase the question differently and get a new query from the question to query tool.\n', suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a Power BI agent from a chat LLM and tools.
If you supply only a toolkit and no Power BI dataset, the same LLM is used for both.
langchain.agents.agent_toolkits.create_python_agent(llm: langchain.llms.base.BaseLLM, tool: langchain.tools.python.tool.PythonREPLTool, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = False, prefix: str = 'You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return "I don\'t know" as the answer.\n', **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Construct a python agent from an LLM and tool.
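Example (a minimal sketch; the agent writes and runs Python in the REPL tool to compute the answer):
from langchain.agents.agent_toolkits import create_python_agent
from langchain.llms import OpenAI
from langchain.tools.python.tool import PythonREPLTool
agent = create_python_agent(llm=OpenAI(temperature=0), tool=PythonREPLTool(), verbose=True)
agent.run("What is the 10th fibonacci number?")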
langchain.agents.agent_toolkits.create_sql_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Construct a SQL agent from an LLM and tools.
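Example (a hedged sketch; the SQLite URI is illustrative, and depending on your version SQLDatabaseToolkit may also accept an llm):
from langchain.agents.agent_toolkits import create_sql_agent, SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # point this at your own database
agent = create_sql_agent(llm=OpenAI(temperature=0), toolkit=SQLDatabaseToolkit(db=db), verbose=True)
agent.run("How many employees are there?")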
langchain.agents.agent_toolkits.create_vectorstore_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore agent from an LLM and tools.
langchain.agents.agent_toolkits.create_vectorstore_router_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore router agent from an LLM and tools.
PromptTemplates#
Prompt template classes.
pydantic model langchain.prompts.BaseChatPromptTemplate[source]#
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
abstract format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]#
Format kwargs into a list of messages.
format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages.
pydantic model langchain.prompts.BasePromptTemplate[source]#
Base class for all prompt templates, returning a prompt.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field output_parser: Optional[langchain.schema.BaseOutputParser] = None#
How to parse the output of calling an LLM on this formatted prompt.
dict(**kwargs: Any) → Dict[source]#
Return dictionary representation of prompt.
abstract format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
abstract format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages.
partial(**kwargs: Union[str, Callable[[], str]]) → langchain.prompts.base.BasePromptTemplate[source]#
Return a partial of the prompt template.
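For example (a minimal sketch; a partial binds some variables now and leaves the rest for later):
prompt = PromptTemplate(input_variables=["foo", "bar"], template="{foo}{bar}")
partial_prompt = prompt.partial(foo="foo")
print(partial_prompt.format(bar="baz"))  # prints "foobaz"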
save(file_path: Union[pathlib.Path, str]) → None[source]#
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
pydantic model langchain.prompts.ChatPromptTemplate[source]#
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]#
Format kwargs into a list of messages.
partial(**kwargs: Union[str, Callable[[], str]]) → langchain.prompts.base.BasePromptTemplate[source]#
Return a partial of the prompt template.
save(file_path: Union[pathlib.Path, str]) → None[source]#
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
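Example (a minimal sketch using the message prompt templates from langchain.prompts.chat):
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate
system = SystemMessagePromptTemplate.from_template("You are a helpful assistant that translates {input_language} to {output_language}.")
human = HumanMessagePromptTemplate.from_template("{text}")
chat_prompt = ChatPromptTemplate.from_messages([system, human])
messages = chat_prompt.format_messages(input_language="English", output_language="French", text="I love programming.")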
pydantic model langchain.prompts.FewShotPromptTemplate[source]#
Prompt template that contains few shot examples.
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
PromptTemplate used to format an individual example.
field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
field example_separator: str = '\n\n'#
String separator used to join the prefix, the examples, and suffix.
field examples: Optional[List[dict]] = None#
Examples to format into the prompt.
Either this or example_selector should be provided.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field prefix: str = ''#
A prompt template string to put before the examples.
field suffix: str [Required]#
A prompt template string to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwargs: Any) → Dict[source]# | /content/https://python.langchain.com/en/latest/reference/modules/prompts.html |
ddf8cd975bdf-3 | dict(**kwargs: Any) → Dict[source]#
Return a dictionary of the prompt.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
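Example (a minimal sketch):
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
examples = [{"word": "happy", "antonym": "sad"}, {"word": "tall", "antonym": "short"}]
example_prompt = PromptTemplate(input_variables=["word", "antonym"], template="Word: {word}\nAntonym: {antonym}")
few_shot = FewShotPromptTemplate(examples=examples, example_prompt=example_prompt, prefix="Give the antonym of every input.", suffix="Word: {input}\nAntonym:", input_variables=["input"])
print(few_shot.format(input="big"))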
pydantic model langchain.prompts.FewShotPromptWithTemplates[source]#
Prompt template that contains few shot examples.
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
PromptTemplate used to format an individual example.
field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
field example_separator: str = '\n\n'#
String separator used to join the prefix, the examples, and suffix.
field examples: Optional[List[dict]] = None#
Examples to format into the prompt.
Either this or example_selector should be provided.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None#
A PromptTemplate to put before the examples.
field suffix: langchain.prompts.base.StringPromptTemplate [Required]#
A PromptTemplate to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwargs: Any) → Dict[source]#
Return a dictionary of the prompt.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
pydantic model langchain.prompts.MessagesPlaceholder[source]#
Prompt template that assumes variable is already list of messages.
format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]#
Format kwargs into a list of BaseMessages.
property input_variables: List[str]#
Input variables for this prompt template.
langchain.prompts.Prompt#
alias of langchain.prompts.prompt.PromptTemplate
pydantic model langchain.prompts.PromptTemplate[source]#
Schema to represent a prompt for an LLM.
Example
from langchain import PromptTemplate
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field template: str [Required]#
The prompt template.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]#
Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Parameters
examples – List of examples to use in the prompt.
suffix – String to go after the list of examples. Should generally
set up the user’s input.
input_variables – A list of variable names the final prompt template
will expect.
example_separator – The separator to use in between examples. Defaults
to two new line characters.
prefix – String that should go before any examples. Generally includes
instructions. Defaults to an empty string.
Returns
The final prompt generated.
classmethod from_file(template_file: Union[str, pathlib.Path], input_variables: List[str], **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]#
Load a prompt from a file.
Parameters
template_file – The path to the file containing the prompt template.
input_variables – A list of variable names the final prompt template
will expect.
Returns
The prompt loaded from the file.
classmethod from_template(template: str, **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]#
Load a prompt template from a template.
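For example (input variables are inferred from the template string):
prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")
prompt.format(adjective="funny", content="chickens")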
pydantic model langchain.prompts.StringPromptTemplate[source]#
String prompt should expose the format method, returning a prompt.
format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages.
langchain.prompts.load_prompt(path: Union[str, pathlib.Path]) → langchain.prompts.base.BasePromptTemplate[source]#
Unified method for loading a prompt from LangChainHub or local fs.
Contents
Autonomous Agents
Generative Agents
Experimental Modules#
This module contains experimental modules and reproductions of existing work using LangChain primitives.
Autonomous Agents#
Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.
class langchain.experimental.BabyAGI(*, memory: Optional[langchain.schema.BaseMemory] = None, callback_manager: langchain.callbacks.base.BaseCallbackManager = None, verbose: bool = None, task_list: collections.deque = None, task_creation_chain: langchain.chains.base.Chain, task_prioritization_chain: langchain.chains.base.Chain, execution_chain: langchain.chains.base.Chain, task_id_counter: int = 1, vectorstore: langchain.vectorstores.base.VectorStore, max_iterations: Optional[int] = None)[source]#
Controller model for the BabyAGI agent.
model Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
execute_task(objective: str, task: str, k: int = 5) → str[source]#
Execute a task.
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, verbose: bool = False, task_execution_chain: Optional[langchain.chains.base.Chain] = None, **kwargs: Dict[str, Any]) → langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI[source]#
Initialize the BabyAGI Controller.
get_next_task(result: str, task_description: str, objective: str) → List[Dict][source]#
Get the next task.
property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
prioritize_tasks(this_task_id: int, objective: str) → List[Dict][source]#
Prioritize tasks.
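Example (a hedged sketch following the BabyAGI notebook; the in-memory FAISS setup and the 1536 embedding dimensionality are assumptions tied to OpenAI embeddings):
import faiss
from langchain import OpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import BabyAGI
from langchain.vectorstores import FAISS
embeddings = OpenAIEmbeddings()
vectorstore = FAISS(embeddings.embed_query, faiss.IndexFlatL2(1536), InMemoryDocstore({}), {})
baby_agi = BabyAGI.from_llm(llm=OpenAI(temperature=0), vectorstore=vectorstore, max_iterations=3)
baby_agi({"objective": "Write a weather report for SF today"})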
class langchain.experimental.AutoGPT(ai_name: str, memory: langchain.vectorstores.base.VectorStoreRetriever, chain: langchain.chains.llm.LLMChain, output_parser: langchain.experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser, tools: List[langchain.tools.base.BaseTool], feedback_tool: Optional[langchain.tools.human.tool.HumanInputRun] = None)[source]#
Agent class for interacting with Auto-GPT.
Generative Agents#
Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.
class langchain.experimental.GenerativeAgent(*, name: str, age: Optional[int] = None, traits: str = 'N/A', status: str, memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory, llm: langchain.schema.BaseLanguageModel, verbose: bool = False, summary: str = '', summary_refresh_seconds: int = 3600, last_refreshed: datetime.datetime = None, daily_summaries: List[str] = None)[source]#
A character with memory and innate characteristics.
model Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
field age: Optional[int] = None#
The optional age of the character.
field daily_summaries: List[str] [Optional]#
Summary of the events in the plan that the agent took.
generate_dialogue_response(observation: str) → Tuple[bool, str][source]#
React to a given observation.
generate_reaction(observation: str) → Tuple[bool, str][source]#
React to a given observation.
get_full_header(force_refresh: bool = False) → str[source]#
Return a full header of the agent’s status, summary, and current time.
get_summary(force_refresh: bool = False) → str[source]#
Return a descriptive summary of the agent.
field last_refreshed: datetime.datetime [Optional]#
The last time the character’s summary was regenerated.
field llm: langchain.schema.BaseLanguageModel [Required]#
The underlying language model.
field memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required]#
The memory object that combines relevance, recency, and ‘importance’.
field name: str [Required]#
The character’s name.
field status: str [Required]#
The current status of the character.
summarize_related_memories(observation: str) → str[source]#
Summarize memories that are most relevant to an observation.
field summary: str = ''#
Stateful self-summary generated via reflection on the character’s memory.
field summary_refresh_seconds: int = 3600#
How frequently to re-generate the summary.
field traits: str = 'N/A'#
Permanent traits to ascribe to the character.
class langchain.experimental.GenerativeAgentMemory(*, llm: langchain.schema.BaseLanguageModel, memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever, verbose: bool = False, reflection_threshold: Optional[float] = None, current_plan: List[str] = [], importance_weight: float = 0.15, aggregate_importance: float = 0.0, max_tokens_limit: int = 1200, queries_key: str = 'queries', most_recent_memories_token_key: str = 'recent_memories_token', add_memory_key: str = 'add_memory', relevant_memories_key: str = 'relevant_memories', relevant_memories_simple_key: str = 'relevant_memories_simple', most_recent_memories_key: str = 'most_recent_memories')[source]#
add_memory(memory_content: str) → List[str][source]#
Add an observation or memory to the agent’s memory.
field aggregate_importance: float = 0.0#
Track the sum of the ‘importance’ of recent memories.
Triggers reflection when it reaches reflection_threshold.
clear() → None[source]#
Clear memory contents.
field current_plan: List[str] = []#
The current plan of the agent.
fetch_memories(observation: str) → List[langchain.schema.Document][source]#
Fetch related memories.
field importance_weight: float = 0.15#
How much weight to assign the memory importance.
field llm: langchain.schema.BaseLanguageModel [Required]#
The core language model.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Return key-value pairs given the text input to the chain.
field memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]#
The retriever to fetch related memories.
property memory_variables: List[str]#
Input keys this memory class will load dynamically.
pause_to_reflect() → List[str][source]#
Reflect on recent observations and generate ‘insights’.
field reflection_threshold: Optional[float] = None#
When aggregate_importance exceeds reflection_threshold, stop to reflect.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save the context of this model run to memory.
Retrievers#
pydantic model langchain.retrievers.ChatGPTPluginRetriever[source]#
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field bearer_token: str [Required]#
field filter: Optional[dict] = None#
field top_k: int = 3#
field url: str [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.ContextualCompressionRetriever[source]#
Retriever that wraps a base retriever and compresses the results.
field base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]#
Compressor for compressing retrieved documents.
field base_retriever: langchain.schema.BaseRetriever [Required]#
Base Retriever to use for getting relevant documents.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
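For example (a hedged sketch; LLMChainExtractor is one available compressor, and the texts are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.vectorstores import FAISS
base_retriever = FAISS.from_texts(["foo", "bar"], OpenAIEmbeddings()).as_retriever()
compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=base_retriever)
docs = retriever.get_relevant_documents("foo")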
class langchain.retrievers.DataberryRetriever(datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
api_key: Optional[str]#
datastore_url: str#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
top_k: Optional[int]#
class langchain.retrievers.ElasticSearchBM25Retriever(client: Any, index_name: str)[source]#
Wrapper around Elasticsearch using BM25 as a retrieval method.
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the “Deployments” page.
To obtain your Elastic Cloud password for the default “elastic” user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to “Security” > “Users”
Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
add_texts(texts: Iterable[str], refresh_indices: bool = True) → List[str][source]#
Run more texts through the embeddings and add them to the retriever.
Parameters
texts – Iterable of strings to add to the retriever.
refresh_indices – bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the retriever.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod create(elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75) → langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
class langchain.retrievers.MetalRetriever(client: Any, params: Optional[dict] = None)[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.PineconeHybridSearchRetriever[source]#
field alpha: float = 0.5#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field sparse_encoder: Any = None#
field top_k: int = 4#
add_texts(texts: List[str], ids: Optional[List[str]] = None) → None[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.RemoteLangChainRetriever[source]#
field headers: Optional[dict] = None#
field input_key: str = 'message'#
field metadata_key: str = 'metadata'#
field page_content_key: str = 'page_content'#
field response_key: str = 'response'#
field url: str [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.SVMRetriever[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field k: int = 4#
field relevancy_threshold: Optional[float] = None#
field texts: List[str] [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.retrievers.svm.SVMRetriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
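Example (a minimal sketch; requires scikit-learn, and the texts are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import SVMRetriever
retriever = SVMRetriever.from_texts(["foo", "bar", "world hello foo bar"], OpenAIEmbeddings())
docs = retriever.get_relevant_documents("foo")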
pydantic model langchain.retrievers.TFIDFRetriever[source]#
field docs: List[langchain.schema.Document] [Required]#
field k: int = 4#
field tfidf_array: Any = None#
field vectorizer: Any = None#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) → langchain.retrievers.tfidf.TFIDFRetriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
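Example (a minimal sketch; requires scikit-learn):
from langchain.retrievers import TFIDFRetriever
retriever = TFIDFRetriever.from_texts(["foo", "bar", "world hello foo bar"])
docs = retriever.get_relevant_documents("foo")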
pydantic model langchain.retrievers.TimeWeightedVectorStoreRetriever[source]#
Retriever combining embedding similarity with recency.
field decay_rate: float = 0.01#
The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).
field default_salience: Optional[float] = None#
The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
field k: int = 4#
The maximum number of documents to retrieve in a given call.
field memory_stream: List[langchain.schema.Document] [Optional]#
The memory_stream of documents to search through.
field other_score_keys: List[str] = []#
Other keys in the metadata to factor into the score, e.g. ‘importance’.
field search_kwargs: dict [Optional]#
Keyword arguments to pass to the vectorstore similarity search.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
The vectorstore to store documents and determine salience.
async aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Add documents to vectorstore.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Add documents to vectorstore.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_salient_docs(query: str) → Dict[int, Tuple[langchain.schema.Document, float]][source]#
Return documents that are salient to the query.
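Example (a hedged sketch; the in-memory FAISS setup and the 1536 dimensionality are assumptions tied to OpenAI embeddings):
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document
from langchain.vectorstores import FAISS
embeddings = OpenAIEmbeddings()
vectorstore = FAISS(embeddings.embed_query, faiss.IndexFlatL2(1536), InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=0.01, k=1)
retriever.add_documents([Document(page_content="hello world")])
docs = retriever.get_relevant_documents("hello world")  # recency boosts just-added documents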
class langchain.retrievers.WeaviateHybridSearchRetriever(client: Any, index_name: str, text_key: str, alpha: float = 0.5, k: int = 4, attributes: Optional[List[str]] = None)[source]#
class Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
extra = 'forbid'#
add_documents(docs: List[langchain.schema.Document]) → List[str][source]#
Upload documents to Weaviate.
async aget_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) → List[langchain.schema.Document][source]#
Look up similar documents in Weaviate.
Contents
Installation and Setup
Wrappers
LLM
Embeddings
Llama.cpp#
This page covers how to use llama.cpp within LangChain.
It is broken into two parts: installation and setup, and then references to specific Llama-cpp wrappers.
Installation and Setup#
Install the Python package with pip install llama-cpp-python
Download one of the supported models and convert it to the llama.cpp format per the instructions
Wrappers#
LLM#
There exists a LlamaCpp LLM wrapper, which you can access with
from langchain.llms import LlamaCpp
For a more detailed walkthrough of this, see this notebook
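For instance (a hedged sketch; the model path is illustrative and must point at a model already converted to the llama.cpp format):
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="./models/ggml-model-q4_0.bin")
print(llm("Q: Name the planets in the solar system. A:"))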
Embeddings#
There exists a LlamaCpp Embeddings wrapper, which you can access with
from langchain.embeddings import LlamaCppEmbeddings
For a more detailed walkthrough of this, see this notebook
Contents
Installation and Setup
Wrappers
VectorStore
Pinecone#
This page covers how to use the Pinecone ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
Installation and Setup#
Install the Python SDK with pip install pinecone-client
Wrappers#
VectorStore#
There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Pinecone
For a more detailed walkthrough of the Pinecone wrapper, see this notebook
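For instance (a hedged sketch; the API key, environment, and index name are illustrative):
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
pinecone.init(api_key="...", environment="us-east1-gcp")
docsearch = Pinecone.from_texts(["foo", "bar"], OpenAIEmbeddings(), index_name="langchain-demo")
docs = docsearch.similarity_search("foo")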
Contents
Installation and Setup
Wrappers
LLM
Embeddings
Tokenizer
Datasets
Hugging Face#
This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hugging Face wrappers.
Installation and Setup#
If you want to work with the Hugging Face Hub:
Install the Hub client library with pip install huggingface_hub
Create a Hugging Face account (it’s free!)
Create an access token and set it as an environment variable (HUGGINGFACEHUB_API_TOKEN)
If you want to work with the Hugging Face Python libraries:
Install transformers with pip install transformers for working with models and tokenizers
Install datasets with pip install datasets for working with datasets
Wrappers#
LLM#
There exist two Hugging Face LLM wrappers, one for a local pipeline and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for models that support the following tasks: text2text-generation, text-generation
To use the local pipeline wrapper:
from langchain.llms import HuggingFacePipeline
To use the wrapper for a model hosted on Hugging Face Hub:
from langchain.llms import HuggingFaceHub
For a more detailed walkthrough of the Hugging Face Hub wrapper, see this notebook
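For instance (a hedged sketch; the repo_id and model_kwargs are illustrative, and HUGGINGFACEHUB_API_TOKEN is assumed to be set):
from langchain import LLMChain, PromptTemplate
from langchain.llms import HuggingFaceHub
llm = HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature": 0.1, "max_length": 64})
prompt = PromptTemplate(input_variables=["question"], template="Question: {question}\nAnswer:")
print(LLMChain(prompt=prompt, llm=llm).run("What is the capital of France?"))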
Embeddings#
There exist two Hugging Face Embeddings wrappers, one for a local model and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for sentence-transformers models.
To use the local pipeline wrapper:
from langchain.embeddings import HuggingFaceEmbeddings
To use the wrapper for a model hosted on Hugging Face Hub:
from langchain.embeddings import HuggingFaceHubEmbeddings
For a more detailed walkthrough of this, see this notebook
Tokenizer#
There are several places you can use tokenizers available through the transformers package.
By default, it is used to count tokens for all LLMs.
You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_huggingface_tokenizer(...)
For a more detailed walkthrough of this, see this notebook
Datasets#
The Hugging Face Hub has lots of great datasets that can be used to evaluate your LLM chains.
For a detailed walkthrough of how to use them to do so, see this notebook
Contents
Install Comet and Dependencies
Initialize Comet and Set your Credentials
Set OpenAI and SerpAPI credentials
Scenario 1: Using just an LLM
Scenario 2: Using an LLM in a Chain
Scenario 3: Using An Agent with Tools
Scenario 4: Using Custom Evaluation Metrics
Comet#
In this guide we will demonstrate how to track your Langchain Experiments, Evaluation Metrics, and LLM Sessions with Comet.
Example Project: Comet with LangChain
Install Comet and Dependencies#
%pip install comet_ml langchain openai google-search-results spacy textstat pandas
import sys
!{sys.executable} -m spacy download en_core_web_sm
Initialize Comet and Set your Credentials#
You can grab your Comet API Key here or click the link after initializing Comet
import comet_ml
comet_ml.init(project_name="comet-example-langchain")
Set OpenAI and SerpAPI credentials#
You will need an OpenAI API Key and a SerpAPI API Key to run the following examples
import os
os.environ["OPENAI_API_KEY"] = "..."
#os.environ["OPENAI_ORGANIZATION"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."
Scenario 1: Using just an LLM#
from datetime import datetime
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI
comet_callback = CometCallbackHandler(
project_name="comet-example-langchain",
complexity_metrics=True,
stream_logs=True,
tags=["llm"],
visualizations=["dep"],
)
manager = CallbackManager([StdOutCallbackHandler(), comet_callback])
llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)
llm_result = llm.generate(["Tell me a joke", "Tell me a poem", "Tell me a fact"] * 3)
print("LLM result", llm_result)
comet_callback.flush_tracker(llm, finish=True)
Scenario 2: Using an LLM in a Chain#
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
comet_callback = CometCallbackHandler(
complexity_metrics=True,
project_name="comet-example-langchain",
stream_logs=True,
tags=["synopsis-chain"],
)
manager = CallbackManager([StdOutCallbackHandler(), comet_callback])
llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)
test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]
print(synopsis_chain.apply(test_prompts))
comet_callback.flush_tracker(synopsis_chain, finish=True)
Scenario 3: Using An Agent with Tools#
from langchain.agents import initialize_agent, load_tools
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI
comet_callback = CometCallbackHandler(
project_name="comet-example-langchain",
complexity_metrics=True,
stream_logs=True,
tags=["agent"],
)
manager = CallbackManager([StdOutCallbackHandler(), comet_callback])
llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)
tools = load_tools(["serpapi", "llm-math"], llm=llm, callback_manager=manager)
agent = initialize_agent(
tools,
llm,
agent="zero-shot-react-description",
callback_manager=manager,
verbose=True,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
comet_callback.flush_tracker(agent, finish=True)
Scenario 4: Using Custom Evaluation Metrics#
The CometCallbackHandler also allows you to define and use Custom Evaluation Metrics to assess generated outputs from your model. Let’s take a look at how this works.
In the snippet below, we will use the ROUGE metric to evaluate the quality of a generated summary of an input prompt.
%pip install rouge-score
from rouge_score import rouge_scorer
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
class Rouge:
def __init__(self, reference):
self.reference = reference
self.scorer = rouge_scorer.RougeScorer(["rougeLsum"], use_stemmer=True)
def compute_metric(self, generation, prompt_idx, gen_idx):
prediction = generation.text
results = self.scorer.score(target=self.reference, prediction=prediction)
return {
"rougeLsum_score": results["rougeLsum"].fmeasure,
"reference": self.reference, | /content/https://python.langchain.com/en/latest/ecosystem/comet_tracking.html |
47f33361edbc-4 | "reference": self.reference,
}
reference = """
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building.
It was the first structure to reach a height of 300 metres.
It is now taller than the Chrysler Building in New York City by 5.2 metres (17 ft).
Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France.
"""
rouge_score = Rouge(reference=reference)
template = """Given the following article, it is your job to write a summary.
Article:
{article}
Summary: This is the summary for the above article:"""
prompt_template = PromptTemplate(input_variables=["article"], template=template)
comet_callback = CometCallbackHandler(
project_name="comet-example-langchain",
complexity_metrics=False,
stream_logs=True,
tags=["custom_metrics"],
custom_metrics=rouge_score.compute_metric,
)
manager = CallbackManager([StdOutCallbackHandler(), comet_callback])
llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)
test_prompts = [
{
"article": """
The tower is 324 metres (1,063 ft) tall, about the same height as
an 81-storey building, and the tallest structure in Paris. Its base is square,
measuring 125 metres (410 ft) on each side.
During its construction, the Eiffel Tower surpassed the
Washington Monument to become the tallest man-made structure in the world,
a title it held for 41 years until the Chrysler Building
in New York City was finished in 1930.
It was the first structure to reach a height of 300 metres.
Due to the addition of a broadcasting aerial at the top of the tower in 1957,
it is now taller than the Chrysler Building by 5.2 metres (17 ft).
Excluding transmitters, the Eiffel Tower is the second tallest
free-standing structure in France after the Millau Viaduct.
"""
}
]
print(synopsis_chain.apply(test_prompts))
comet_callback.flush_tracker(synopsis_chain, finish=True)
Zilliz
Contents
Installation and Setup
Wrappers
VectorStore
Zilliz#
This page covers how to use the Zilliz Cloud ecosystem within LangChain.
Zilliz uses the Milvus integration.
It is broken into two parts: installation and setup, and then references to specific Milvus wrappers.
Installation and Setup#
Install the Python SDK with pip install pymilvus
Wrappers#
VectorStore#
There exists a wrapper around Zilliz indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Milvus
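A minimal sketch of connecting to a Zilliz Cloud cluster (the URI and credentials below are placeholders, and the connection_args keys assume the standard pymilvus connection parameters):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus

# Placeholder endpoint and credentials -- substitute your own Zilliz Cloud values.
vector_db = Milvus.from_texts(
    ["LangChain can use Zilliz Cloud through the Milvus integration."],
    OpenAIEmbeddings(),
    connection_args={
        "uri": "https://<your-cluster>.zillizcloud.com",
        "user": "<username>",
        "password": "<password>",
        "secure": True,
    },
)
docs = vector_db.similarity_search("How does LangChain connect to Zilliz?")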
For a more detailed walkthrough of the Milvus wrapper, see this notebook.
Google Serper Wrapper
Contents
Setup
Wrappers
Utility
Output
Tool
Google Serper Wrapper#
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
Setup#
Go to serper.dev to sign up for a free account
Get the API key and set it as an environment variable (SERPER_API_KEY)
Wrappers#
Utility#
There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities import GoogleSerperAPIWrapper
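On its own, the utility can be queried directly; a minimal sketch (assumes SERPER_API_KEY is set in the environment, and the query is illustrative):
import os
from langchain.utilities import GoogleSerperAPIWrapper

os.environ["SERPER_API_KEY"] = ""  # your Serper API key

search = GoogleSerperAPIWrapper()
print(search.run("How long is the Golden Gate Bridge?"))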
You can use it as part of a Self Ask chain:
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
import os
os.environ["SERPER_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""
llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
Tool(
name="Intermediate Answer",
func=search.run,
description="useful for when you need to ask with search"
)
]
self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True) | /content/https://python.langchain.com/en/latest/ecosystem/google_serper.html |
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
Output#
Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
For a more detailed walkthrough of this wrapper, see this notebook.
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["google-serper"])
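A minimal sketch of wiring the loaded tool into an agent (assumes SERPER_API_KEY and OPENAI_API_KEY are set in the environment; the question is illustrative):
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["google-serper"])
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What was the score of the most recent Super Bowl?")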
For more information on this, see this page.
Qdrant
Contents
Installation and Setup
Wrappers
VectorStore
Qdrant#
This page covers how to use the Qdrant ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Qdrant wrappers.
Installation and Setup#
Install the Python SDK with pip install qdrant-client
Wrappers#
VectorStore#
There exists a wrapper around Qdrant indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Qdrant
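A minimal sketch of standing up an in-memory Qdrant vectorstore (the collection name and texts are illustrative; location=":memory:" is forwarded to qdrant-client, so no running Qdrant server is required):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

qdrant = Qdrant.from_texts(
    ["Qdrant can serve as a LangChain vectorstore."],
    OpenAIEmbeddings(),
    location=":memory:",  # in-process instance for local experimentation
    collection_name="demo_collection",
)
docs = qdrant.similarity_search("What can serve as a LangChain vectorstore?")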
For a more detailed walkthrough of the Qdrant wrapper, see this notebook.