Dataset columns: id (string, 14-16 chars), text (string, 31-3.14k chars), source (string, 58-124 chars)
9eab2b1f6adf-0
Source code for langchain.docstore.wikipedia

"""Wrapper around the Wikipedia API."""
from typing import Union

from langchain.docstore.base import Docstore
from langchain.docstore.document import Document


class Wikipedia(Docstore):
    """Wrapper around the Wikipedia API."""

    def __init__(self) -> None:
        """Check that the wikipedia package is installed."""
        try:
            import wikipedia  # noqa: F401
        except ImportError:
            raise ValueError(
                "Could not import wikipedia python package. "
                "Please install it with `pip install wikipedia`."
            )

    def search(self, search: str) -> Union[str, Document]:
        """Try to search for a wiki page.

        If the page exists, return its content as a Document.
        If it does not exist, return a string listing similar entries.
        """
        import wikipedia

        try:
            page = wikipedia.page(search)  # fetch the page once instead of twice
            result: Union[str, Document] = Document(
                page_content=page.content, metadata={"page": page.url}
            )
        except wikipedia.PageError:
            result = f"Could not find [{search}]. Similar: {wikipedia.search(search)}"
        except wikipedia.DisambiguationError:
            result = f"Could not find [{search}]. Similar: {wikipedia.search(search)}"
        return result
/content/https://python.langchain.com/en/latest/_modules/langchain/docstore/wikipedia.html
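A minimal usage sketch for the docstore wrapper above (assumes the wikipedia package is installed and network access is available):

    .. code-block:: python

        from langchain.docstore.wikipedia import Wikipedia

        docstore = Wikipedia()
        result = docstore.search("Python (programming language)")
        # On a hit, `result` is a Document whose metadata carries the page URL;
        # on a miss, it is a string listing similar page titles.
        if hasattr(result, "page_content"):
            print(result.metadata["page"])
        else:
            print(result)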
3a863556f846-0
Source code for langchain.utilities.searx_search

"""Utility for using the SearxNG meta search API.

SearxNG is a privacy-friendly free metasearch engine that aggregates results
from `multiple search engines
<https://docs.searxng.org/admin/engines/configured_engines.html>`_ and databases
and supports the `OpenSearch
<https://github.com/dewitt/opensearch/blob/master/opensearch-1-1-draft-6.md>`_
specification.

More details on the installation instructions `here. <../../ecosystem/searx.html>`_

For the search API refer to https://docs.searxng.org/dev/search_api.html

Quick Start
-----------

To use this utility you need to provide the searx host. This can be done
by passing the named parameter :attr:`searx_host <SearxSearchWrapper.searx_host>`
or exporting the environment variable SEARX_HOST. Note: this is the only
required parameter.

Then create a searx search instance like this:

    .. code-block:: python

        from langchain.utilities import SearxSearchWrapper

        # when the host starts with `http` SSL is disabled and the connection
        # is assumed to be on a private network
        searx_host = 'http://self.hosted'

        search = SearxSearchWrapper(searx_host=searx_host)

You can now use the ``search`` instance to query the searx API.
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-1
Searching
---------

Use the :meth:`run() <SearxSearchWrapper.run>` and
:meth:`results() <SearxSearchWrapper.results>` methods to query the searx API.
Other methods are available for convenience.

:class:`SearxResults` is a convenience wrapper around the raw json result.

Example usage of the ``run`` method to make a search:

    .. code-block:: python

        s.run(query="what is the best search engine?")

Engine Parameters
-----------------

You can pass any `accepted searx search API
<https://docs.searxng.org/dev/search_api.html>`_ parameters to the
:py:class:`SearxSearchWrapper` instance.

In the following example we are using the
:attr:`engines <SearxSearchWrapper.engines>` and the ``language`` parameters:

    .. code-block:: python

        # assuming the searx host is set as above or exported as an env variable
        s = SearxSearchWrapper(engines=['google', 'bing'], language='es')

Search Tips
-----------

Searx offers a special `search syntax
<https://docs.searxng.org/user/index.html#search-syntax>`_ that can also be
used instead of passing engine parameters.

For example the following query:

    .. code-block:: python

        s = SearxSearchWrapper("langchain library", engines=['github'])
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-2
        # can also be written as:
        s = SearxSearchWrapper("langchain library !github")

        # or even:
        s = SearxSearchWrapper("langchain library !gh")

In some situations you might want to pass an extra string to the search query.
For example when the `run()` method is called by an agent. The search suffix
can also be used as a way to pass extra parameters to searx or the underlying
search engines.

    .. code-block:: python

        # select the github engine and pass the search suffix
        s = SearxSearchWrapper("langchain library", query_suffix="!gh")

        s = SearxSearchWrapper("langchain library")
        # the engine can also be selected with the conventional google search syntax
        s.run("large language models", query_suffix="site:github.com")

*NOTE*: A search suffix can be defined on both the instance and the method
level. The resulting query will be the concatenation of the two with the
former taking precedence.

See `SearxNG Configured Engines
<https://docs.searxng.org/admin/engines/configured_engines.html>`_ and
`SearxNG Search Syntax <https://docs.searxng.org/user/index.html#id1>`_
for more details.

Notes
-----

This wrapper is based on the SearxNG fork https://github.com/searxng/searxng
which is better maintained than the original Searx project and offers more
features.

Public searxNG instances often use a rate limiter for API usage, so you might
want to
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-3
use a self-hosted instance and disable the rate limiter.

If you are self-hosting an instance you can customize the rate limiter for
your own network as described `here
<https://github.com/searxng/searxng/pull/2129>`_.

For a list of public SearxNG instances see https://searx.space/
"""

import json
from typing import Any, Dict, List, Optional

import aiohttp
import requests
from pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator, validator

from langchain.utils import get_from_dict_or_env


def _get_default_params() -> dict:
    return {"language": "en", "format": "json"}


class SearxResults(dict):
    """Dict-like wrapper around search api results."""

    _data = ""

    def __init__(self, data: str):
        """Take a raw result from Searx and make it into a dict-like object."""
        json_data = json.loads(data)
        super().__init__(json_data)
        self.__dict__ = self

    def __str__(self) -> str:
        """Text representation of searx result."""
        return self._data

    @property
    def results(self) -> Any:
        """Silence mypy for accessing this field.

        :meta private:
        """
        return self.get("results")

    @property
    def answers(self) -> Any:
        """Helper accessor on the json result."""
        return self.get("answers")
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-4
return self.get("answers") [docs]class SearxSearchWrapper(BaseModel): """Wrapper for Searx API. To use you need to provide the searx host by passing the named parameter ``searx_host`` or exporting the environment variable ``SEARX_HOST``. In some situations you might want to disable SSL verification, for example if you are running searx locally. You can do this by passing the named parameter ``unsecure``. You can also pass the host url scheme as ``http`` to disable SSL. Example: .. code-block:: python from langchain.utilities import SearxSearchWrapper searx = SearxSearchWrapper(searx_host="http://localhost:8888") Example with SSL disabled: .. code-block:: python from langchain.utilities import SearxSearchWrapper # note the unsecure parameter is not needed if you pass the url scheme as # http searx = SearxSearchWrapper(searx_host="http://localhost:8888", unsecure=True) """ _result: SearxResults = PrivateAttr() searx_host: str = "" unsecure: bool = False params: dict = Field(default_factory=_get_default_params) headers: Optional[dict] = None engines: Optional[List[str]] = [] categories: Optional[List[str]] = [] query_suffix: Optional[str] = "" k: int = 10 aiosession: Optional[Any] = None
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-5
    @validator("unsecure")
    def disable_ssl_warnings(cls, v: bool) -> bool:
        """Disable SSL warnings."""
        if v:
            # requests.urllib3.disable_warnings()
            try:
                import urllib3

                urllib3.disable_warnings()
            except ImportError as e:
                print(e)
        return v

    @root_validator()
    def validate_params(cls, values: Dict) -> Dict:
        """Validate that custom searx params are merged with default ones."""
        user_params = values["params"]
        default = _get_default_params()
        values["params"] = {**default, **user_params}

        engines = values.get("engines")
        if engines:
            values["params"]["engines"] = ",".join(engines)

        categories = values.get("categories")
        if categories:
            values["params"]["categories"] = ",".join(categories)

        searx_host = get_from_dict_or_env(values, "searx_host", "SEARX_HOST")
        if not searx_host.startswith("http"):
            print(
                f"Warning: missing the url scheme on host "
                f"! assuming secure https://{searx_host} "
            )
            searx_host = "https://" + searx_host
        elif searx_host.startswith("http://"):
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-6
            values["unsecure"] = True
            cls.disable_ssl_warnings(True)
        values["searx_host"] = searx_host

        return values

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    def _searx_api_query(self, params: dict) -> SearxResults:
        """Actual request to searx API."""
        raw_result = requests.get(
            self.searx_host,
            headers=self.headers,
            params=params,
            verify=not self.unsecure,
        )
        # test if http result is ok
        if not raw_result.ok:
            raise ValueError("Searx API returned an error: ", raw_result.text)
        res = SearxResults(raw_result.text)
        self._result = res
        return res

    async def _asearx_api_query(self, params: dict) -> SearxResults:
        if not self.aiosession:
            async with aiohttp.ClientSession() as session:
                async with session.get(
                    self.searx_host,
                    headers=self.headers,
                    params=params,
                    ssl=False if self.unsecure else None,
                ) as response:
                    if not response.ok:
                        raise ValueError(
                            "Searx API returned an error: ", response.text
                        )
                    result = SearxResults(await response.text())
                    self._result = result
        else:
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-7
            async with self.aiosession.get(
                self.searx_host,
                headers=self.headers,
                params=params,
                # aiohttp expects `ssl=`, not the requests-style `verify=`
                ssl=False if self.unsecure else None,
            ) as response:
                if not response.ok:
                    raise ValueError("Searx API returned an error: ", response.text)
                result = SearxResults(await response.text())
                self._result = result

        return result

    def run(
        self,
        query: str,
        engines: Optional[List[str]] = None,
        categories: Optional[List[str]] = None,
        query_suffix: Optional[str] = "",
        **kwargs: Any,
    ) -> str:
        """Run query through Searx API and parse results.

        You can pass any other params to the searx query API.

        Args:
            query: The query to search for.
            query_suffix: Extra suffix appended to the query.
            engines: List of engines to use for the query.
            categories: List of categories to use for the query.
            **kwargs: extra parameters to pass to the searx API.

        Returns:
            str: The result of the query.

        Raises:
            ValueError: If an error occurred with the query.

        Example:
            This will make a query to the qwant engine:

            .. code-block:: python

                from langchain.utilities import SearxSearchWrapper
                searx = SearxSearchWrapper(searx_host="http://my.searx.host")
                searx.run("what is the weather in France ?", engines=["qwant"])
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-8
                # the same result can be achieved using the `!` syntax of searx
                # to select the engine using `query_suffix`
                searx.run("what is the weather in France ?", query_suffix="!qwant")
        """
        _params = {
            "q": query,
        }
        params = {**self.params, **_params, **kwargs}

        if self.query_suffix and len(self.query_suffix) > 0:
            params["q"] += " " + self.query_suffix

        if isinstance(query_suffix, str) and len(query_suffix) > 0:
            params["q"] += " " + query_suffix

        if isinstance(engines, list) and len(engines) > 0:
            params["engines"] = ",".join(engines)

        if isinstance(categories, list) and len(categories) > 0:
            params["categories"] = ",".join(categories)

        res = self._searx_api_query(params)

        if len(res.answers) > 0:
            toret = res.answers[0]

        # only return the content of the results list
        elif len(res.results) > 0:
            toret = "\n\n".join(
                [r.get("content", "") for r in res.results[: self.k]]
            )
        else:
            toret = "No good search result found"

        return toret

    async def arun(
        self,
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-9
        query: str,
        engines: Optional[List[str]] = None,
        query_suffix: Optional[str] = "",
        **kwargs: Any,
    ) -> str:
        """Asynchronous version of `run`."""
        _params = {
            "q": query,
        }
        params = {**self.params, **_params, **kwargs}

        if self.query_suffix and len(self.query_suffix) > 0:
            params["q"] += " " + self.query_suffix

        if isinstance(query_suffix, str) and len(query_suffix) > 0:
            params["q"] += " " + query_suffix

        if isinstance(engines, list) and len(engines) > 0:
            params["engines"] = ",".join(engines)

        res = await self._asearx_api_query(params)

        if len(res.answers) > 0:
            toret = res.answers[0]

        # only return the content of the results list
        elif len(res.results) > 0:
            toret = "\n\n".join(
                [r.get("content", "") for r in res.results[: self.k]]
            )
        else:
            toret = "No good search result found"

        return toret

    def results(
        self,
        query: str,
        num_results: int,
        engines: Optional[List[str]] = None,
        categories: Optional[List[str]] = None,
        query_suffix: Optional[str] = "",
        **kwargs: Any,
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-10
    ) -> List[Dict]:
        """Run query through Searx API and return the results with metadata.

        Args:
            query: The query to search for.
            query_suffix: Extra suffix appended to the query.
            num_results: Limit the number of results to return.
            engines: List of engines to use for the query.
            categories: List of categories to use for the query.
            **kwargs: extra parameters to pass to the searx API.

        Returns:
            A list of dicts with the following keys:

                snippet: The description of the result.
                title: The title of the result.
                link: The link to the result.
                engines: The engines used for the result.
                category: Searx category of the result.
        """
        _params = {
            "q": query,
        }
        params = {**self.params, **_params, **kwargs}

        if self.query_suffix and len(self.query_suffix) > 0:
            params["q"] += " " + self.query_suffix

        if isinstance(query_suffix, str) and len(query_suffix) > 0:
            params["q"] += " " + query_suffix

        if isinstance(engines, list) and len(engines) > 0:
            params["engines"] = ",".join(engines)

        if isinstance(categories, list) and len(categories) > 0:
            params["categories"] = ",".join(categories)
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-11
params["categories"] = ",".join(categories) results = self._searx_api_query(params).results[:num_results] if len(results) == 0: return [{"Result": "No good Search Result was found"}] return [ { "snippet": result.get("content", ""), "title": result["title"], "link": result["url"], "engines": result["engines"], "category": result["category"], } for result in results ] [docs] async def aresults( self, query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = "", **kwargs: Any, ) -> List[Dict]: """Asynchronously query with json results. Uses aiohttp. See `results` for more info. """ _params = { "q": query, } params = {**self.params, **_params, **kwargs} if self.query_suffix and len(self.query_suffix) > 0: params["q"] += " " + self.query_suffix if isinstance(query_suffix, str) and len(query_suffix) > 0: params["q"] += " " + query_suffix if isinstance(engines, list) and len(engines) > 0:
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
3a863556f846-12
            params["engines"] = ",".join(engines)

        results = (await self._asearx_api_query(params)).results[:num_results]
        if len(results) == 0:
            return [{"Result": "No good Search Result was found"}]

        return [
            {
                "snippet": result.get("content", ""),
                "title": result["title"],
                "link": result["url"],
                "engines": result["engines"],
                "category": result["category"],
            }
            for result in results
        ]
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html
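A minimal end-to-end sketch of the wrapper above (assumes a SearxNG instance reachable at the hypothetical host http://localhost:8888):

    .. code-block:: python

        from langchain.utilities import SearxSearchWrapper

        # hypothetical self-hosted instance; substitute your own SEARX_HOST
        search = SearxSearchWrapper(searx_host="http://localhost:8888", k=5)

        # plain-text answer, restricted to two engines
        text = search.run("what is a large language model?",
                          engines=["google", "bing"])

        # structured results with title/link/snippet metadata
        meta = search.results("langchain", num_results=3, query_suffix="!gh")
        for r in meta:
            print(r["title"], r["link"])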
bbea0fd2eac8-0
Source code for langchain.utilities.google_serper

"""Util that calls Google Search using the Serper.dev API."""
from typing import Dict, Optional

import requests
from pydantic.class_validators import root_validator
from pydantic.main import BaseModel

from langchain.utils import get_from_dict_or_env


class GoogleSerperAPIWrapper(BaseModel):
    """Wrapper around the Serper.dev Google Search API.

    You can create a free API key at https://serper.dev.

    To use, you should have the environment variable ``SERPER_API_KEY``
    set with your API key, or pass `serper_api_key` as a named parameter
    to the constructor.

    Example:
        .. code-block:: python

            from langchain import GoogleSerperAPIWrapper
            google_serper = GoogleSerperAPIWrapper()
    """

    k: int = 10
    gl: str = "us"
    hl: str = "en"
    serper_api_key: Optional[str] = None

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key exists in environment."""
        serper_api_key = get_from_dict_or_env(
            values, "serper_api_key", "SERPER_API_KEY"
        )
        values["serper_api_key"] = serper_api_key
        return values

    def run(self, query: str) -> str:
        """Run query through GoogleSearch and parse result."""
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html
bbea0fd2eac8-1
"""Run query through GoogleSearch and parse result.""" results = self._google_serper_search_results(query, gl=self.gl, hl=self.hl) return self._parse_results(results) def _parse_results(self, results: dict) -> str: snippets = [] if results.get("answerBox"): answer_box = results.get("answerBox", {}) if answer_box.get("answer"): return answer_box.get("answer") elif answer_box.get("snippet"): return answer_box.get("snippet").replace("\n", " ") elif answer_box.get("snippetHighlighted"): return ", ".join(answer_box.get("snippetHighlighted")) if results.get("knowledgeGraph"): kg = results.get("knowledgeGraph", {}) title = kg.get("title") entity_type = kg.get("type") if entity_type: snippets.append(f"{title}: {entity_type}.") description = kg.get("description") if description: snippets.append(description) for attribute, value in kg.get("attributes", {}).items(): snippets.append(f"{title} {attribute}: {value}.") for result in results["organic"][: self.k]: if "snippet" in result:
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html
bbea0fd2eac8-2
if "snippet" in result: snippets.append(result["snippet"]) for attribute, value in result.get("attributes", {}).items(): snippets.append(f"{attribute}: {value}.") if len(snippets) == 0: return "No good Google Search Result was found" return " ".join(snippets) def _google_serper_search_results(self, search_term: str, gl: str, hl: str) -> dict: headers = { "X-API-KEY": self.serper_api_key or "", "Content-Type": "application/json", } params = {"q": search_term, "gl": gl, "hl": hl} response = requests.post( "https://google.serper.dev/search", headers=headers, params=params ) response.raise_for_status() search_results = response.json() return search_results By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 26, 2023.
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html
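A minimal usage sketch for the Serper wrapper above (assumes the SERPER_API_KEY environment variable is set to a valid key):

    .. code-block:: python

        from langchain.utilities import GoogleSerperAPIWrapper

        search = GoogleSerperAPIWrapper(k=5, gl="us", hl="en")
        # returns an answer-box answer, knowledge-graph facts, or joined
        # organic-result snippets, depending on what Serper sends back
        print(search.run("capital of France"))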
e368f1fb41a2-0
Source code for langchain.utilities.bing_search

"""Util that calls Bing Search.

In order to set this up, follow the instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
"""
from typing import Dict, List

import requests
from pydantic import BaseModel, Extra, root_validator

from langchain.utils import get_from_dict_or_env


class BingSearchAPIWrapper(BaseModel):
    """Wrapper for Bing Search API.

    In order to set this up, follow the instructions at:
    https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
    """

    bing_subscription_key: str
    bing_search_url: str
    k: int = 10

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    def _bing_search_results(self, search_term: str, count: int) -> List[dict]:
        headers = {"Ocp-Apim-Subscription-Key": self.bing_subscription_key}
        params = {
            "q": search_term,
            "count": count,
            "textDecorations": True,
            "textFormat": "HTML",
        }
        response = requests.get(
            self.bing_search_url, headers=headers, params=params  # type: ignore
        )
        response.raise_for_status()
        search_results = response.json()
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html
e368f1fb41a2-1
        return search_results["webPages"]["value"]

    @root_validator(pre=True)
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and endpoint exist in environment."""
        bing_subscription_key = get_from_dict_or_env(
            values, "bing_subscription_key", "BING_SUBSCRIPTION_KEY"
        )
        values["bing_subscription_key"] = bing_subscription_key

        bing_search_url = get_from_dict_or_env(
            values,
            "bing_search_url",
            "BING_SEARCH_URL",
            # default="https://api.bing.microsoft.com/v7.0/search",
        )
        values["bing_search_url"] = bing_search_url
        return values

    def run(self, query: str) -> str:
        """Run query through BingSearch and parse result."""
        snippets = []
        results = self._bing_search_results(query, count=self.k)
        if len(results) == 0:
            return "No good Bing Search Result was found"
        for result in results:
            snippets.append(result["snippet"])

        return " ".join(snippets)

    def results(self, query: str, num_results: int) -> List[Dict]:
        """Run query through BingSearch and return metadata.

        Args:
            query: The query to search for.
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html
e368f1fb41a2-2
            num_results: The number of results to return.

        Returns:
            A list of dictionaries with the following keys:
                snippet - The description of the result.
                title - The title of the result.
                link - The link to the result.
        """
        metadata_results = []
        results = self._bing_search_results(query, count=num_results)
        if len(results) == 0:
            return [{"Result": "No good Bing Search Result was found"}]
        for result in results:
            metadata_result = {
                "snippet": result["snippet"],
                "title": result["name"],
                "link": result["url"],
            }
            metadata_results.append(metadata_result)

        return metadata_results
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html
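A minimal usage sketch for the Bing wrapper above (assumes BING_SUBSCRIPTION_KEY and BING_SEARCH_URL are exported, e.g. the v7.0 endpoint shown in the commented-out default):

    .. code-block:: python

        from langchain.utilities import BingSearchAPIWrapper

        search = BingSearchAPIWrapper(k=4)  # keys picked up from the environment
        print(search.run("python pydantic validators"))

        # metadata variant: list of {snippet, title, link} dicts
        for item in search.results("python pydantic validators", num_results=3):
            print(item["title"], "->", item["link"])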
12c27cc80b84-0
Source code for langchain.utilities.apify

from typing import Any, Callable, Dict, Optional

from pydantic import BaseModel, root_validator

from langchain.document_loaders import ApifyDatasetLoader
from langchain.document_loaders.base import Document
from langchain.utils import get_from_dict_or_env


class ApifyWrapper(BaseModel):
    """Wrapper around Apify.

    To use, you should have the ``apify-client`` python package installed,
    and the environment variable ``APIFY_API_TOKEN`` set with your API key,
    or pass `apify_api_token` as a named parameter to the constructor.
    """

    apify_client: Any
    apify_client_async: Any

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate environment.

        Validate that an Apify API token is set and the apify-client
        Python package exists in the current environment.
        """
        apify_api_token = get_from_dict_or_env(
            values, "apify_api_token", "APIFY_API_TOKEN"
        )

        try:
            from apify_client import ApifyClient, ApifyClientAsync

            values["apify_client"] = ApifyClient(apify_api_token)
            values["apify_client_async"] = ApifyClientAsync(apify_api_token)
        except ImportError:
            raise ValueError(
                "Could not import apify-client Python package. "
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/apify.html
12c27cc80b84-1
raise ValueError( "Could not import apify-client Python package. " "Please install it with `pip install apify-client`." ) return values [docs] def call_actor( self, actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None, ) -> ApifyDatasetLoader: """Run an Actor on the Apify platform and wait for results to be ready. Args: actor_id (str): The ID or name of the Actor on the Apify platform. run_input (Dict): The input object of the Actor that you're trying to run. dataset_mapping_function (Callable): A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. build (str, optional): Optionally specifies the actor build to run. It can be either a build tag or build number. memory_mbytes (int, optional): Optional memory limit for the run, in megabytes. timeout_secs (int, optional): Optional timeout for the run, in seconds. Returns: ApifyDatasetLoader: A loader that will fetch the records from the Actor run's default dataset. """ actor_call = self.apify_client.actor(actor_id).call( run_input=run_input, build=build, memory_mbytes=memory_mbytes, timeout_secs=timeout_secs,
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/apify.html
12c27cc80b84-2
        )

        return ApifyDatasetLoader(
            dataset_id=actor_call["defaultDatasetId"],
            dataset_mapping_function=dataset_mapping_function,
        )

    async def acall_actor(
        self,
        actor_id: str,
        run_input: Dict,
        dataset_mapping_function: Callable[[Dict], Document],
        *,
        build: Optional[str] = None,
        memory_mbytes: Optional[int] = None,
        timeout_secs: Optional[int] = None,
    ) -> ApifyDatasetLoader:
        """Run an Actor on the Apify platform and wait for results to be ready.

        Args:
            actor_id (str): The ID or name of the Actor on the Apify platform.
            run_input (Dict): The input object of the Actor that you're trying
                to run.
            dataset_mapping_function (Callable): A function that takes a single
                dictionary (an Apify dataset item) and converts it to an
                instance of the Document class.
            build (str, optional): Optionally specifies the actor build to run.
                It can be either a build tag or build number.
            memory_mbytes (int, optional): Optional memory limit for the run,
                in megabytes.
            timeout_secs (int, optional): Optional timeout for the run,
                in seconds.

        Returns:
            ApifyDatasetLoader: A loader that will fetch the records from the
                Actor run's default dataset.
        """
        actor_call = await self.apify_client_async.actor(actor_id).call(
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/apify.html
12c27cc80b84-3
            run_input=run_input,
            build=build,
            memory_mbytes=memory_mbytes,
            timeout_secs=timeout_secs,
        )

        return ApifyDatasetLoader(
            dataset_id=actor_call["defaultDatasetId"],
            dataset_mapping_function=dataset_mapping_function,
        )
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/apify.html
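A minimal usage sketch for the wrapper above (assumes APIFY_API_TOKEN is set; the actor ID and the "url"/"text" dataset fields follow Apify's public website-content-crawler example and may differ for other actors):

    .. code-block:: python

        from langchain.document_loaders.base import Document
        from langchain.utilities import ApifyWrapper

        apify = ApifyWrapper()
        loader = apify.call_actor(
            actor_id="apify/website-content-crawler",
            run_input={"startUrls": [{"url": "https://python.langchain.com"}]},
            dataset_mapping_function=lambda item: Document(
                page_content=item.get("text", ""),
                metadata={"source": item.get("url", "")},
            ),
        )
        docs = loader.load()  # blocks until the actor run finishes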
e27bab04b16a-0
Source code for langchain.utilities.google_places_api

"""Chain that calls the Google Places API."""

import logging
from typing import Any, Dict, Optional

from pydantic import BaseModel, Extra, root_validator

from langchain.utils import get_from_dict_or_env


class GooglePlacesAPIWrapper(BaseModel):
    """Wrapper around Google Places API.

    To use, you should have the ``googlemaps`` python package installed,
    **an API key for the google maps platform**, and the environment variable
    ``GPLACES_API_KEY`` set with your API key, or pass ``gplaces_api_key``
    as a named parameter to the constructor.

    By default, this will return all the results on the input query. You can
    use the top_k_results argument to limit the number of results.

    Example:
        .. code-block:: python

            from langchain import GooglePlacesAPIWrapper
            gplaceapi = GooglePlacesAPIWrapper()
    """

    gplaces_api_key: Optional[str] = None
    google_map_client: Any  #: :meta private:
    top_k_results: Optional[int] = None

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the api key is in your environment variable."""
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html
e27bab04b16a-1
        gplaces_api_key = get_from_dict_or_env(
            values, "gplaces_api_key", "GPLACES_API_KEY"
        )
        values["gplaces_api_key"] = gplaces_api_key
        try:
            import googlemaps

            values["google_map_client"] = googlemaps.Client(gplaces_api_key)
        except ImportError:
            raise ValueError(
                "Could not import googlemaps python package. "
                "Please install it with `pip install googlemaps`."
            )
        return values

    def run(self, query: str) -> str:
        """Run Places search and get up to k places that match the query."""
        search_results = self.google_map_client.places(query)["results"]
        num_to_return = len(search_results)

        places = []

        if num_to_return == 0:
            return "Google Places did not find any places that match the description"

        num_to_return = (
            num_to_return
            if self.top_k_results is None
            else min(num_to_return, self.top_k_results)
        )

        for i in range(num_to_return):
            result = search_results[i]
            details = self.fetch_place_details(result["place_id"])

            if details is not None:
                places.append(details)

        return "\n".join([f"{i+1}. {item}" for i, item in enumerate(places)])
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html
e27bab04b16a-2
    def fetch_place_details(self, place_id: str) -> Optional[str]:
        try:
            place_details = self.google_map_client.place(place_id)
            formatted_details = self.format_place_details(place_details)
            return formatted_details
        except Exception as e:
            logging.error(f"An error occurred while fetching place details: {e}")
            return None

    def format_place_details(self, place_details: Dict[str, Any]) -> Optional[str]:
        try:
            name = place_details.get("result", {}).get("name", "Unknown")
            address = place_details.get("result", {}).get(
                "formatted_address", "Unknown"
            )
            phone_number = place_details.get("result", {}).get(
                "formatted_phone_number", "Unknown"
            )
            website = place_details.get("result", {}).get("website", "Unknown")

            formatted_details = (
                f"{name}\nAddress: {address}\n"
                f"Phone: {phone_number}\nWebsite: {website}\n\n"
            )
            return formatted_details
        except Exception as e:
            logging.error(f"An error occurred while formatting place details: {e}")
            return None
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html
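A minimal usage sketch for the wrapper above (assumes GPLACES_API_KEY is set and the googlemaps package is installed):

    .. code-block:: python

        from langchain.utilities.google_places_api import GooglePlacesAPIWrapper

        places = GooglePlacesAPIWrapper(top_k_results=3)
        # returns a numbered list of "name / address / phone / website" blocks
        print(places.run("coffee shops near Notre Dame, Paris"))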
97a53f2f88ea-0
Source code for langchain.utilities.python

import sys
from io import StringIO
from typing import Dict, Optional

from pydantic import BaseModel, Field


class PythonREPL(BaseModel):
    """Simulates a standalone Python REPL."""

    globals: Optional[Dict] = Field(default_factory=dict, alias="_globals")
    locals: Optional[Dict] = Field(default_factory=dict, alias="_locals")

    def run(self, command: str) -> str:
        """Run command with own globals/locals and return anything printed."""
        old_stdout = sys.stdout
        sys.stdout = mystdout = StringIO()
        try:
            exec(command, self.globals, self.locals)
            sys.stdout = old_stdout
            output = mystdout.getvalue()
        except Exception as e:
            sys.stdout = old_stdout
            output = str(e)
        return output
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/python.html
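A short usage sketch for the REPL above; note that only printed output is captured, and exceptions come back as strings:

    .. code-block:: python

        from langchain.utilities.python import PythonREPL

        repl = PythonREPL()
        print(repl.run("x = 2 ** 10\nprint(x)"))   # "1024"
        print(repl.run("1 / 0"))                    # "division by zero"
        # state persists between calls via the shared globals/locals dicts
        print(repl.run("print(x + 1)"))             # "1025"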
97f1ffb7ee1c-0
Source code for langchain.utilities.serpapi

"""Chain that calls SerpAPI.

Heavily borrowed from https://github.com/ofirpress/self-ask
"""
import os
import sys
from typing import Any, Dict, Optional, Tuple

import aiohttp
from pydantic import BaseModel, Extra, Field, root_validator

from langchain.utils import get_from_dict_or_env


class HiddenPrints:
    """Context manager to hide prints."""

    def __enter__(self) -> None:
        """Open file to pipe stdout to."""
        self._original_stdout = sys.stdout
        sys.stdout = open(os.devnull, "w")

    def __exit__(self, *_: Any) -> None:
        """Close file that stdout was piped to."""
        sys.stdout.close()
        sys.stdout = self._original_stdout


class SerpAPIWrapper(BaseModel):
    """Wrapper around SerpAPI.

    To use, you should have the ``google-search-results`` python package
    installed, and the environment variable ``SERPAPI_API_KEY`` set with your
    API key, or pass `serpapi_api_key` as a named parameter to the constructor.

    Example:
        .. code-block:: python

            from langchain import SerpAPIWrapper
            serpapi = SerpAPIWrapper()
    """

    search_engine: Any  #: :meta private:
    params: dict = Field(
        default={
            "engine": "google",
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html
97f1ffb7ee1c-1
            "google_domain": "google.com",
            "gl": "us",
            "hl": "en",
        }
    )
    serpapi_api_key: Optional[str] = None
    aiosession: Optional[aiohttp.ClientSession] = None

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exist in environment."""
        serpapi_api_key = get_from_dict_or_env(
            values, "serpapi_api_key", "SERPAPI_API_KEY"
        )
        values["serpapi_api_key"] = serpapi_api_key
        try:
            from serpapi import GoogleSearch

            values["search_engine"] = GoogleSearch
        except ImportError:
            raise ValueError(
                "Could not import serpapi python package. "
                "Please install it with `pip install google-search-results`."
            )
        return values

    async def arun(self, query: str) -> str:
        """Use aiohttp to run query through SerpAPI and parse result."""

        def construct_url_and_params() -> Tuple[str, Dict[str, str]]:
            params = self.get_params(query)
            params["source"] = "python"
            if self.serpapi_api_key:
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html
97f1ffb7ee1c-2
                params["serp_api_key"] = self.serpapi_api_key
            params["output"] = "json"
            url = "https://serpapi.com/search"
            return url, params

        url, params = construct_url_and_params()
        if not self.aiosession:
            async with aiohttp.ClientSession() as session:
                async with session.get(url, params=params) as response:
                    res = await response.json()
        else:
            async with self.aiosession.get(url, params=params) as response:
                res = await response.json()

        return self._process_response(res)

    def run(self, query: str) -> str:
        """Run query through SerpAPI and parse result."""
        return self._process_response(self.results(query))

    def results(self, query: str) -> dict:
        """Run query through SerpAPI and return the raw result."""
        params = self.get_params(query)
        with HiddenPrints():
            search = self.search_engine(params)
            res = search.get_dict()
        return res

    def get_params(self, query: str) -> Dict[str, str]:
        """Get parameters for SerpAPI."""
        _params = {
            "api_key": self.serpapi_api_key,
            "q": query,
        }
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html
97f1ffb7ee1c-3
"q": query, } params = {**self.params, **_params} return params @staticmethod def _process_response(res: dict) -> str: """Process response from SerpAPI.""" if "error" in res.keys(): raise ValueError(f"Got error from SerpAPI: {res['error']}") if "answer_box" in res.keys() and "answer" in res["answer_box"].keys(): toret = res["answer_box"]["answer"] elif "answer_box" in res.keys() and "snippet" in res["answer_box"].keys(): toret = res["answer_box"]["snippet"] elif ( "answer_box" in res.keys() and "snippet_highlighted_words" in res["answer_box"].keys() ): toret = res["answer_box"]["snippet_highlighted_words"][0] elif ( "sports_results" in res.keys() and "game_spotlight" in res["sports_results"].keys() ): toret = res["sports_results"]["game_spotlight"] elif ( "knowledge_graph" in res.keys() and "description" in res["knowledge_graph"].keys() ): toret = res["knowledge_graph"]["description"]
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html
97f1ffb7ee1c-4
): toret = res["knowledge_graph"]["description"] elif "snippet" in res["organic_results"][0].keys(): toret = res["organic_results"][0]["snippet"] else: toret = "No good search result found" return toret By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 26, 2023.
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html
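A minimal usage sketch for the wrapper above (assumes SERPAPI_API_KEY is set and the google-search-results package is installed):

    .. code-block:: python

        from langchain.utilities.serpapi import SerpAPIWrapper

        search = SerpAPIWrapper(params={"engine": "google", "gl": "us", "hl": "en"})
        print(search.run("who won the 2018 world cup?"))

        # raw JSON, if you want to inspect answer_box / knowledge_graph yourself
        raw = search.results("who won the 2018 world cup?")
        print(list(raw.keys()))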
029bd555e190-0
Source code for langchain.utilities.wolfram_alpha

"""Util that calls WolframAlpha."""
from typing import Any, Dict, Optional

from pydantic import BaseModel, Extra, root_validator

from langchain.utils import get_from_dict_or_env


class WolframAlphaAPIWrapper(BaseModel):
    """Wrapper for Wolfram Alpha.

    Docs for using:

    1. Go to wolfram alpha and sign up for a developer account
    2. Create an app and get your APP ID
    3. Save your APP ID into the WOLFRAM_ALPHA_APPID env variable
    4. pip install wolframalpha
    """

    wolfram_client: Any  #: :meta private:
    wolfram_alpha_appid: Optional[str] = None

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exist in environment."""
        wolfram_alpha_appid = get_from_dict_or_env(
            values, "wolfram_alpha_appid", "WOLFRAM_ALPHA_APPID"
        )
        values["wolfram_alpha_appid"] = wolfram_alpha_appid

        try:
            import wolframalpha
        except ImportError:
            raise ImportError(
                "wolframalpha is not installed. "
                "Please install it with `pip install wolframalpha`"
            )
        client = wolframalpha.Client(wolfram_alpha_appid)
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/wolfram_alpha.html
029bd555e190-1
        values["wolfram_client"] = client

        return values

    def run(self, query: str) -> str:
        """Run query through WolframAlpha and parse result."""
        res = self.wolfram_client.query(query)

        try:
            assumption = next(res.pods).text
            answer = next(res.results).text
        except StopIteration:
            return "Wolfram Alpha wasn't able to answer it"

        if answer is None or answer == "":
            # We don't want to return the assumption alone if the answer is empty
            return "No good Wolfram Alpha Result was found"
        else:
            return f"Assumption: {assumption} \nAnswer: {answer}"
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/wolfram_alpha.html
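A minimal usage sketch for the wrapper above (assumes WOLFRAM_ALPHA_APPID is set and the wolframalpha package is installed):

    .. code-block:: python

        from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

        wolfram = WolframAlphaAPIWrapper()
        # returns "Assumption: ... Answer: ..." or a fallback string
        print(wolfram.run("What is 2x+5 = -3x + 7?"))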
bd0945485729-0
Source code for langchain.utilities.google_search

"""Util that calls Google Search."""
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, Extra, root_validator

from langchain.utils import get_from_dict_or_env


class GoogleSearchAPIWrapper(BaseModel):
    """Wrapper for Google Search API.

    Instructions adapted from
    https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search

    TODO: DOCS for using it
    1. Install google-api-python-client
       - If you don't already have a Google account, sign up.
       - If you have never created a Google APIs Console project, read the
         Managing Projects page and create a project in the Google API Console.
       - Install the library using pip install google-api-python-client.
         The current version of the library is 2.70.0 at this time.
    2. To create an API key:
       - Navigate to the APIs & Services → Credentials panel in Cloud Console.
       - Select Create credentials, then select API key from the drop-down menu.
       - The API key created dialog box displays your newly created key.
       - You now have an API_KEY.
    3. Set up a Custom Search Engine so you can search the entire web
       - Create a custom search engine at this link.
       - In Sites to search, add any valid URL (i.e. www.stackoverflow.com).
       - That's all you have to fill in; the rest doesn't matter.
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html
bd0945485729-1
         In the left-side menu, click Edit search engine → {your search engine
         name} → Setup. Set "Search the entire web" to ON. Remove the URL you
         added from the list of Sites to search.
       - Under Search engine ID you'll find the search-engine-ID.
    4. Enable the Custom Search API
       - Navigate to the APIs & Services → Dashboard panel in Cloud Console.
       - Click Enable APIs and Services.
       - Search for Custom Search API and click on it.
       - Click Enable.
         URL for it:
         https://console.cloud.google.com/apis/library/customsearch.googleapis.com
    """

    search_engine: Any  #: :meta private:
    google_api_key: Optional[str] = None
    google_cse_id: Optional[str] = None
    k: int = 10
    siterestrict: bool = False

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    def _google_search_results(self, search_term: str, **kwargs: Any) -> List[dict]:
        cse = self.search_engine.cse()
        if self.siterestrict:
            cse = cse.siterestrict()
        res = cse.list(q=search_term, cx=self.google_cse_id, **kwargs).execute()
        return res.get("items", [])

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exist in environment."""
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html
bd0945485729-2
        google_api_key = get_from_dict_or_env(
            values, "google_api_key", "GOOGLE_API_KEY"
        )
        values["google_api_key"] = google_api_key

        google_cse_id = get_from_dict_or_env(values, "google_cse_id", "GOOGLE_CSE_ID")
        values["google_cse_id"] = google_cse_id

        try:
            from googleapiclient.discovery import build
        except ImportError:
            raise ImportError(
                "google-api-python-client is not installed. "
                "Please install it with `pip install google-api-python-client`"
            )

        service = build("customsearch", "v1", developerKey=google_api_key)
        values["search_engine"] = service

        return values

    def run(self, query: str) -> str:
        """Run query through GoogleSearch and parse result."""
        snippets = []
        results = self._google_search_results(query, num=self.k)
        if len(results) == 0:
            return "No good Google Search Result was found"
        for result in results:
            if "snippet" in result:
                snippets.append(result["snippet"])

        return " ".join(snippets)

    def results(self, query: str, num_results: int) -> List[Dict]:
        """Run query through GoogleSearch and return metadata.

        Args:
            query: The query to search for.
            num_results: The number of results to return.

        Returns:
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html
bd0945485729-3
            A list of dictionaries with the following keys:
                snippet - The description of the result.
                title - The title of the result.
                link - The link to the result.
        """
        metadata_results = []
        results = self._google_search_results(query, num=num_results)
        if len(results) == 0:
            return [{"Result": "No good Google Search Result was found"}]
        for result in results:
            metadata_result = {
                "title": result["title"],
                "link": result["link"],
            }
            if "snippet" in result:
                metadata_result["snippet"] = result["snippet"]
            metadata_results.append(metadata_result)

        return metadata_results
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html
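A minimal usage sketch for the wrapper above (assumes GOOGLE_API_KEY and GOOGLE_CSE_ID are set as described in the docstring):

    .. code-block:: python

        from langchain.utilities.google_search import GoogleSearchAPIWrapper

        search = GoogleSearchAPIWrapper(k=5)
        print(search.run("langchain custom search"))

        # structured variant with title/link/snippet per result
        for item in search.results("langchain custom search", num_results=3):
            print(item["title"], item["link"])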
16c6228ca63b-0
Source code for langchain.utilities.openweathermap

"""Util that calls OpenWeatherMap using PyOWM."""
from typing import Any, Dict, Optional

from pydantic import Extra, root_validator

from langchain.tools.base import BaseModel
from langchain.utils import get_from_dict_or_env


class OpenWeatherMapAPIWrapper(BaseModel):
    """Wrapper for OpenWeatherMap API using PyOWM.

    Docs for using:

    1. Go to OpenWeatherMap and sign up for an API key
    2. Save your API KEY into the OPENWEATHERMAP_API_KEY env variable
    3. pip install pyowm
    """

    owm: Any
    openweathermap_api_key: Optional[str] = None

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @root_validator(pre=True)
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key exists in environment."""
        openweathermap_api_key = get_from_dict_or_env(
            values, "openweathermap_api_key", "OPENWEATHERMAP_API_KEY"
        )
        values["openweathermap_api_key"] = openweathermap_api_key

        try:
            import pyowm
        except ImportError:
            raise ImportError(
                "pyowm is not installed. "
                "Please install it with `pip install pyowm`"
            )

        owm = pyowm.OWM(openweathermap_api_key)
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html
16c6228ca63b-1
        values["owm"] = owm

        return values

    def _format_weather_info(self, location: str, w: Any) -> str:
        detailed_status = w.detailed_status
        wind = w.wind()
        humidity = w.humidity
        temperature = w.temperature("celsius")
        rain = w.rain
        heat_index = w.heat_index
        clouds = w.clouds

        return (
            f"In {location}, the current weather is as follows:\n"
            f"Detailed status: {detailed_status}\n"
            f"Wind speed: {wind['speed']} m/s, direction: {wind['deg']}°\n"
            f"Humidity: {humidity}%\n"
            f"Temperature: \n"
            f"  - Current: {temperature['temp']}°C\n"
            f"  - High: {temperature['temp_max']}°C\n"
            f"  - Low: {temperature['temp_min']}°C\n"
            f"  - Feels like: {temperature['feels_like']}°C\n"
            f"Rain: {rain}\n"
            f"Heat index: {heat_index}\n"
            f"Cloud cover: {clouds}%"
        )

    def run(self, location: str) -> str:
        """Get the current weather information for a specified location."""
        mgr = self.owm.weather_manager()
        observation = mgr.weather_at_place(location)
        w = observation.weather
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html
16c6228ca63b-2
        return self._format_weather_info(location, w)
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html
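A minimal usage sketch for the wrapper above (assumes OPENWEATHERMAP_API_KEY is set and pyowm is installed):

    .. code-block:: python

        from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper

        weather = OpenWeatherMapAPIWrapper()
        # the location string is passed straight to pyowm's weather_at_place()
        print(weather.run("London,GB"))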
1707ee299fdc-0
Source code for langchain.utilities.bash

"""Wrapper around subprocess to run commands."""
import subprocess
from typing import List, Union


class BashProcess:
    """Executes bash commands and returns the output."""

    def __init__(self, strip_newlines: bool = False, return_err_output: bool = False):
        """Initialize with stripping newlines."""
        self.strip_newlines = strip_newlines
        self.return_err_output = return_err_output

    def run(self, commands: Union[str, List[str]]) -> str:
        """Run commands and return final output."""
        if isinstance(commands, str):
            commands = [commands]
        commands = ";".join(commands)
        try:
            output = subprocess.run(
                commands,
                shell=True,
                check=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,
            ).stdout.decode()
        except subprocess.CalledProcessError as error:
            if self.return_err_output:
                return error.stdout.decode()
            return str(error)
        if self.strip_newlines:
            output = output.strip()
        return output
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/bash.html
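A short usage sketch for the process wrapper above; commands are joined with ";" and executed through the shell, so treat input as trusted:

    .. code-block:: python

        from langchain.utilities.bash import BashProcess

        bash = BashProcess(strip_newlines=True)
        print(bash.run("echo hello"))             # "hello"
        print(bash.run(["cd /tmp", "pwd"]))       # "/tmp" (one shell, so cd persists)
        # failures return the error text instead of raising
        print(BashProcess(return_err_output=True).run("ls /no/such/dir"))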
b10a3c4fbce3-0
Source code for langchain.utilities.arxiv

"""Util that calls Arxiv."""
from typing import Any, Dict

from pydantic import BaseModel, Extra, root_validator


class ArxivAPIWrapper(BaseModel):
    """Wrapper around ArxivAPI.

    To use, you should have the ``arxiv`` python package installed.
    https://lukasschwab.me/arxiv.py/index.html

    This wrapper will use the Arxiv API to conduct searches and fetch
    document summaries. By default, it will return the document summaries
    of the top-k results of an input search.
    """

    arxiv_search: Any  #: :meta private:
    arxiv_exceptions: Any  #: :meta private:
    top_k_results: int = 3
    ARXIV_MAX_QUERY_LENGTH = 300

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the python package exists in environment."""
        try:
            import arxiv

            values["arxiv_search"] = arxiv.Search
            values["arxiv_exceptions"] = (
                arxiv.ArxivError,
                arxiv.UnexpectedEmptyPageError,
                arxiv.HTTPError,
            )
        except ImportError:
            raise ValueError(
                "Could not import arxiv python package. "
                "Please install it with `pip install arxiv`."
            )
        return values

    def run(self, query: str) -> str:
        """
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html
b10a3c4fbce3-1
""" Run Arxiv search and get the document meta information. See https://lukasschwab.me/arxiv.py/index.html#Search See https://lukasschwab.me/arxiv.py/index.html#Result It uses only the most informative fields of document meta information. """ try: docs = [ f"Published: {result.updated.date()}\nTitle: {result.title}\n" f"Authors: {', '.join(a.name for a in result.authors)}\n" f"Summary: {result.summary}" for result in self.arxiv_search( # type: ignore query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.top_k_results ).results() ] return "\n\n".join(docs) if docs else "No good Arxiv Result was found" except self.arxiv_exceptions as ex: return f"Arxiv exception: {ex}" By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 26, 2023.
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html
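A minimal usage sketch for the wrapper above (assumes the arxiv package is installed):

    .. code-block:: python

        from langchain.utilities.arxiv import ArxivAPIWrapper

        arxiv = ArxivAPIWrapper(top_k_results=2)
        # returns "Published/Title/Authors/Summary" blocks separated by blank lines
        print(arxiv.run("attention is all you need"))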
b77ce003f145-0
Source code for langchain.utilities.wikipedia

"""Util that calls Wikipedia."""
from typing import Any, Dict, Optional

from pydantic import BaseModel, Extra, root_validator

WIKIPEDIA_MAX_QUERY_LENGTH = 300


class WikipediaAPIWrapper(BaseModel):
    """Wrapper around WikipediaAPI.

    To use, you should have the ``wikipedia`` python package installed.
    This wrapper will use the Wikipedia API to conduct searches and fetch
    page summaries. By default, it will return the page summaries of the
    top-k results of an input search.
    """

    wiki_client: Any  #: :meta private:
    top_k_results: int = 3
    lang: str = "en"

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the python package exists in environment."""
        try:
            import wikipedia

            wikipedia.set_lang(values["lang"])
            values["wiki_client"] = wikipedia
        except ImportError:
            raise ValueError(
                "Could not import wikipedia python package. "
                "Please install it with `pip install wikipedia`."
            )
        return values

    def run(self, query: str) -> str:
        """Run Wikipedia search and get page summaries."""
        search_results = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])
        summaries = []
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html
b77ce003f145-1
        len_search_results = len(search_results)

        if len_search_results == 0:
            return "No good Wikipedia Search Result was found"

        for i in range(min(self.top_k_results, len_search_results)):
            summary = self.fetch_formatted_page_summary(search_results[i])
            if summary is not None:
                summaries.append(summary)

        return "\n\n".join(summaries)

    def fetch_formatted_page_summary(self, page: str) -> Optional[str]:
        try:
            wiki_page = self.wiki_client.page(title=page, auto_suggest=False)
            return f"Page: {page}\nSummary: {wiki_page.summary}"
        except (
            self.wiki_client.exceptions.PageError,
            self.wiki_client.exceptions.DisambiguationError,
        ):
            return None
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html
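A minimal usage sketch for the wrapper above (assumes the wikipedia package is installed):

    .. code-block:: python

        from langchain.utilities.wikipedia import WikipediaAPIWrapper

        wiki = WikipediaAPIWrapper(top_k_results=2, lang="en")
        # "Page: <title>" / "Summary: <...>" blocks for the top matching pages
        print(wiki.run("Alan Turing"))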
0d05d03325e7-0
Source code for langchain.utilities.powerbi

"""Wrapper around a Power BI endpoint."""
from __future__ import annotations

import logging
import os
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Union

import aiohttp
import requests
from aiohttp import ServerTimeoutError
from pydantic import BaseModel, Field, root_validator
from requests.exceptions import Timeout

from langchain.tools.powerbi.prompt import BAD_REQUEST_RESPONSE, UNAUTHORIZED_RESPONSE

_LOGGER = logging.getLogger(__name__)

if TYPE_CHECKING:
    from azure.core.exceptions import ClientAuthenticationError
    from azure.identity import ChainedTokenCredential
    from azure.identity._internal import InteractiveCredential

BASE_URL = os.getenv(
    "POWERBI_BASE_URL", "https://api.powerbi.com/v1.0/myorg/datasets/"
)


class PowerBIDataset(BaseModel):
    """Create a PowerBI engine from a dataset ID and credential or token.

    Use either the credential or a supplied token to authenticate. If both
    are supplied, the credential is used to generate a token. The
    impersonated_user_name is the UPN of a user to be impersonated. If the
    model is not RLS enabled, this will be ignored.
    """

    dataset_id: str
    table_names: List[str]
    group_id: Optional[str] = None
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html
0d05d03325e7-1
    credential: Optional[Union[ChainedTokenCredential, InteractiveCredential]] = None
    token: Optional[str] = None
    impersonated_user_name: Optional[str] = None
    sample_rows_in_table_info: int = Field(default=1, gt=0, le=10)
    aiosession: Optional[aiohttp.ClientSession] = None
    schemas: Dict[str, str] = Field(default_factory=dict, init=False)

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    @root_validator(pre=True, allow_reuse=True)
    def token_or_credential_present(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        """Validate that at least one of token and credentials is present."""
        if "token" in values or "credential" in values:
            return values
        raise ValueError("Please provide either a credential or a token.")

    @property
    def request_url(self) -> str:
        """Get the request url."""
        if self.group_id:
            return f"{BASE_URL}/{self.group_id}/datasets/{self.dataset_id}/executeQueries"  # noqa: E501 # pylint: disable=C0301
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html
0d05d03325e7-2
return f"{BASE_URL}/{self.dataset_id}/executeQueries" # noqa: E501 # pylint: disable=C0301 @property def headers(self) -> Dict[str, str]: """Get the token.""" token = None if self.token: token = self.token if self.credential: try: token = self.credential.get_token( "https://analysis.windows.net/powerbi/api/.default" ).token except Exception as exc: # pylint: disable=broad-exception-caught raise ClientAuthenticationError( "Could not get a token from the supplied credentials." ) from exc if not token: raise ClientAuthenticationError("No credential or token supplied.") return { "Content-Type": "application/json", "Authorization": "Bearer " + token, } [docs] def get_table_names(self) -> Iterable[str]: """Get names of tables available.""" return self.table_names [docs] def get_schemas(self) -> str: """Get the available schema's.""" if self.schemas: return ", ".join([f"{key}: {value}" for key, value in self.schemas.items()]) return "No known schema's yet. Use the schema_powerbi tool first." @property def table_info(self) -> str:
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html
0d05d03325e7-3
        """Information about all tables in the database."""
        return self.get_table_info()

    def _get_tables_to_query(
        self, table_names: Optional[Union[List[str], str]] = None
    ) -> List[str]:
        """Get the table names that need to be queried."""
        if table_names is not None:
            if (
                isinstance(table_names, list)
                and len(table_names) > 0
                and table_names[0] != ""
            ):
                return table_names
            if isinstance(table_names, str) and table_names != "":
                return [table_names]
        return self.table_names

    def _get_tables_todo(self, tables_todo: List[str]) -> List[str]:
        # iterate over a copy: removing items from a list while iterating
        # over it skips elements
        for table in tables_todo[:]:
            if table in self.schemas:
                tables_todo.remove(table)
        return tables_todo

    def _get_schema_for_tables(self, table_names: List[str]) -> str:
        """Create a string of the table schemas for the supplied tables."""
        schemas = [
            schema for table, schema in self.schemas.items() if table in table_names
        ]
        return ", ".join(schemas)

    def get_table_info(
        self, table_names: Optional[Union[List[str], str]] = None
    ) -> str:
        """Get information about specified tables."""
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html
0d05d03325e7-4
) -> str: """Get information about specified tables.""" tables_requested = self._get_tables_to_query(table_names) tables_todo = self._get_tables_todo(tables_requested) for table in tables_todo: try: result = self.run( f"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})" ) except Timeout: _LOGGER.warning("Timeout while getting table info for %s", table) continue except Exception as exc: # pylint: disable=broad-exception-caught if "bad request" in str(exc).lower(): return BAD_REQUEST_RESPONSE if "unauthorized" in str(exc).lower(): return UNAUTHORIZED_RESPONSE return str(exc) self.schemas[table] = json_to_md(result["results"][0]["tables"][0]["rows"]) return self._get_schema_for_tables(tables_requested) [docs] async def aget_table_info( self, table_names: Optional[Union[List[str], str]] = None ) -> str: """Get information about specified tables.""" tables_requested = self._get_tables_to_query(table_names) tables_todo = self._get_tables_todo(tables_requested) for table in tables_todo: try:
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html
0d05d03325e7-5
for table in tables_todo: try: result = await self.arun( f"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})" ) except ServerTimeoutError: _LOGGER.warning("Timeout while getting table info for %s", table) continue except Exception as exc: # pylint: disable=broad-exception-caught if "bad request" in str(exc).lower(): return BAD_REQUEST_RESPONSE if "unauthorized" in str(exc).lower(): return UNAUTHORIZED_RESPONSE return str(exc) self.schemas[table] = json_to_md(result["results"][0]["tables"][0]["rows"]) return self._get_schema_for_tables(tables_requested) [docs] def run(self, command: str) -> Any: """Execute a DAX command and return a json representing the results.""" result = requests.post( self.request_url, json={ "queries": [{"query": command}], "impersonatedUserName": self.impersonated_user_name, "serializerSettings": {"includeNulls": True}, }, headers=self.headers, timeout=10, ) result.raise_for_status() return result.json() [docs] async def arun(self, command: str) -> Any:
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html
0d05d03325e7-6
"""Execute a DAX command and return the result asynchronously.""" json_content = ( { "queries": [{"query": command}], "impersonatedUserName": self.impersonated_user_name, "serializerSettings": {"includeNulls": True}, }, ) if self.aiosession: async with self.aiosession.post( self.request_url, headers=self.headers, json=json_content, timeout=10 ) as response: response.raise_for_status() response_json = await response.json() return response_json async with aiohttp.ClientSession() as session: async with session.post( self.request_url, headers=self.headers, json=json_content, timeout=10 ) as response: response.raise_for_status() response_json = await response.json() return response_json def json_to_md( json_contents: List[Dict[str, Union[str, int, float]]], table_name: Optional[str] = None, ) -> str: """Converts a JSON object to a markdown table.""" output_md = "" headers = json_contents[0].keys() for header in headers: header.replace("[", ".").replace("]", "") if table_name: header.replace(f"{table_name}.", "")
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html
0d05d03325e7-7
header.replace(f"{table_name}.", "") output_md += f"| {header} " output_md += "|\n" for row in json_contents: for value in row.values(): output_md += f"| {value} " output_md += "|\n" return output_md By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 26, 2023.
/content/https://python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html
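The ``json_to_md`` helper above is what turns the sampled DAX rows into the schema strings returned by ``get_table_info``. A minimal sketch of its behavior; the sample rows are invented for illustration:

.. code-block:: python

    rows = [
        {"Table[Id]": 1, "Table[Name]": "foo"},
        {"Table[Id]": 2, "Table[Name]": "bar"},
    ]
    # With the bracket cleanup above, the headers render as Table.Id / Table.Name:
    # | Table.Id | Table.Name |
    # | 1 | foo |
    # | 2 | bar |
    print(json_to_md(rows))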
dd77afaccceb-0
Source code for langchain.chains.llm """Chain that just formats a prompt and calls an LLM.""" from __future__ import annotations from typing import Any, Dict, List, Optional, Sequence, Tuple, Union from pydantic import Extra from langchain.chains.base import Chain from langchain.input import get_colored_text from langchain.prompts.base import BasePromptTemplate from langchain.prompts.prompt import PromptTemplate from langchain.schema import BaseLanguageModel, LLMResult, PromptValue [docs]class LLMChain(Chain): """Chain to run queries against LLMs. Example: .. code-block:: python from langchain import LLMChain, OpenAI, PromptTemplate prompt_template = "Tell me a {adjective} joke" prompt = PromptTemplate( input_variables=["adjective"], template=prompt_template ) llm = LLMChain(llm=OpenAI(), prompt=prompt) """ prompt: BasePromptTemplate """Prompt object to use.""" llm: BaseLanguageModel output_key: str = "text" #: :meta private: class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Will be whatever keys the prompt expects. :meta private: """ return self.prompt.input_variables
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html
dd77afaccceb-1
:meta private: """ return self.prompt.input_variables @property def output_keys(self) -> List[str]: """Will always return text key. :meta private: """ return [self.output_key] def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]: return self.apply([inputs])[0] [docs] def generate(self, input_list: List[Dict[str, Any]]) -> LLMResult: """Generate LLM result from inputs.""" prompts, stop = self.prep_prompts(input_list) return self.llm.generate_prompt(prompts, stop) [docs] async def agenerate(self, input_list: List[Dict[str, Any]]) -> LLMResult: """Generate LLM result from inputs.""" prompts, stop = await self.aprep_prompts(input_list) return await self.llm.agenerate_prompt(prompts, stop) [docs] def prep_prompts( self, input_list: List[Dict[str, Any]] ) -> Tuple[List[PromptValue], Optional[List[str]]]: """Prepare prompts from inputs.""" stop = None if "stop" in input_list[0]: stop = input_list[0]["stop"] prompts = [] for inputs in input_list:
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html
dd77afaccceb-2
prompts = [] for inputs in input_list: selected_inputs = {k: inputs[k] for k in self.prompt.input_variables} prompt = self.prompt.format_prompt(**selected_inputs) _colored_text = get_colored_text(prompt.to_string(), "green") _text = "Prompt after formatting:\n" + _colored_text self.callback_manager.on_text(_text, end="\n", verbose=self.verbose) if "stop" in inputs and inputs["stop"] != stop: raise ValueError( "If `stop` is present in any inputs, should be present in all." ) prompts.append(prompt) return prompts, stop [docs] async def aprep_prompts( self, input_list: List[Dict[str, Any]] ) -> Tuple[List[PromptValue], Optional[List[str]]]: """Prepare prompts from inputs.""" stop = None if "stop" in input_list[0]: stop = input_list[0]["stop"] prompts = [] for inputs in input_list: selected_inputs = {k: inputs[k] for k in self.prompt.input_variables} prompt = self.prompt.format_prompt(**selected_inputs) _colored_text = get_colored_text(prompt.to_string(), "green") _text = "Prompt after formatting:\n" + _colored_text
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html
dd77afaccceb-3
_text = "Prompt after formatting:\n" + _colored_text if self.callback_manager.is_async: await self.callback_manager.on_text( _text, end="\n", verbose=self.verbose ) else: self.callback_manager.on_text(_text, end="\n", verbose=self.verbose) if "stop" in inputs and inputs["stop"] != stop: raise ValueError( "If `stop` is present in any inputs, should be present in all." ) prompts.append(prompt) return prompts, stop [docs] def apply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]: """Utilize the LLM generate method for speed gains.""" response = self.generate(input_list) return self.create_outputs(response) [docs] async def aapply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]: """Utilize the LLM generate method for speed gains.""" response = await self.agenerate(input_list) return self.create_outputs(response) [docs] def create_outputs(self, response: LLMResult) -> List[Dict[str, str]]: """Create outputs from response.""" return [ # Get the text of the top generated string. {self.output_key: generation[0].text} for generation in response.generations ]
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html
dd77afaccceb-4
    async def _acall(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return (await self.aapply([inputs]))[0]

[docs]    def predict(self, **kwargs: Any) -> str:
        """Format prompt with kwargs and pass to LLM.

        Args:
            **kwargs: Keys to pass to prompt template.

        Returns:
            Completion from LLM.

        Example:
            .. code-block:: python

                completion = llm.predict(adjective="funny")
        """
        return self(kwargs)[self.output_key]

[docs]    async def apredict(self, **kwargs: Any) -> str:
        """Format prompt with kwargs and pass to LLM.

        Args:
            **kwargs: Keys to pass to prompt template.

        Returns:
            Completion from LLM.

        Example:
            .. code-block:: python

                completion = await llm.apredict(adjective="funny")
        """
        return (await self.acall(kwargs))[self.output_key]

[docs]    def predict_and_parse(self, **kwargs: Any) -> Union[str, List[str], Dict[str, str]]:
        """Call predict and then parse the results."""
        result = self.predict(**kwargs)
        if self.prompt.output_parser is not None:
            return self.prompt.output_parser.parse(result)
        else:
            return result
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html
dd77afaccceb-5
[docs]    async def apredict_and_parse(
        self, **kwargs: Any
    ) -> Union[str, List[str], Dict[str, str]]:
        """Call apredict and then parse the results."""
        result = await self.apredict(**kwargs)
        if self.prompt.output_parser is not None:
            return self.prompt.output_parser.parse(result)
        else:
            return result

[docs]    def apply_and_parse(
        self, input_list: List[Dict[str, Any]]
    ) -> Sequence[Union[str, List[str], Dict[str, str]]]:
        """Call apply and then parse the results."""
        result = self.apply(input_list)
        return self._parse_result(result)

    def _parse_result(
        self, result: List[Dict[str, str]]
    ) -> Sequence[Union[str, List[str], Dict[str, str]]]:
        if self.prompt.output_parser is not None:
            return [
                self.prompt.output_parser.parse(res[self.output_key]) for res in result
            ]
        else:
            return result

[docs]    async def aapply_and_parse(
        self, input_list: List[Dict[str, Any]]
    ) -> Sequence[Union[str, List[str], Dict[str, str]]]:
        """Call aapply and then parse the results."""
        result = await self.aapply(input_list)
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html
dd77afaccceb-6
        return self._parse_result(result)

    @property
    def _chain_type(self) -> str:
        return "llm_chain"

[docs]    @classmethod
    def from_string(cls, llm: BaseLanguageModel, template: str) -> Chain:
        """Create LLMChain from LLM and template."""
        prompt_template = PromptTemplate.from_template(template)
        return cls(llm=llm, prompt=prompt_template)
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html
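A short usage sketch tying ``predict`` and ``apply`` together; it assumes an OpenAI API key is configured in the environment:

.. code-block:: python

    from langchain import LLMChain, OpenAI, PromptTemplate

    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)

    # predict() formats the prompt, calls the LLM, and returns the "text" key.
    chain.predict(product="colorful socks")

    # apply() batches several inputs through a single generate() call.
    chain.apply([{"product": "socks"}, {"product": "shoes"}])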
771a76cf4ca1-0
Source code for langchain.chains.sequential """Chain pipeline where the outputs of one step feed directly into next.""" from typing import Dict, List from pydantic import Extra, root_validator from langchain.chains.base import Chain from langchain.input import get_color_mapping [docs]class SequentialChain(Chain): """Chain where the outputs of one chain feed directly into next.""" chains: List[Chain] input_variables: List[str] output_variables: List[str] #: :meta private: return_all: bool = False class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Return expected input keys to the chain. :meta private: """ return self.input_variables @property def output_keys(self) -> List[str]: """Return output key. :meta private: """ return self.output_variables @root_validator(pre=True) def validate_chains(cls, values: Dict) -> Dict: """Validate that the correct inputs exist for all chains.""" chains = values["chains"] input_variables = values["input_variables"] memory_keys = list() if "memory" in values and values["memory"] is not None: """Validate that prompt input variables are consistent."""
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/sequential.html
771a76cf4ca1-1
"""Validate that prompt input variables are consistent.""" memory_keys = values["memory"].memory_variables if set(input_variables).intersection(set(memory_keys)): overlapping_keys = set(input_variables) & set(memory_keys) raise ValueError( f"The the input key(s) {''.join(overlapping_keys)} are found " f"in the Memory keys ({memory_keys}) - please use input and " f"memory keys that don't overlap." ) known_variables = set(input_variables + memory_keys) for chain in chains: missing_vars = set(chain.input_keys).difference(known_variables) if missing_vars: raise ValueError( f"Missing required input keys: {missing_vars}, " f"only had {known_variables}" ) overlapping_keys = known_variables.intersection(chain.output_keys) if overlapping_keys: raise ValueError( f"Chain returned keys that already exist: {overlapping_keys}" ) known_variables |= set(chain.output_keys) if "output_variables" not in values: if values.get("return_all", False): output_keys = known_variables.difference(input_variables) else: output_keys = chains[-1].output_keys values["output_variables"] = output_keys else:
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/sequential.html
771a76cf4ca1-2
values["output_variables"] = output_keys else: missing_vars = set(values["output_variables"]).difference(known_variables) if missing_vars: raise ValueError( f"Expected output variables that were not found: {missing_vars}." ) return values def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: known_values = inputs.copy() for i, chain in enumerate(self.chains): outputs = chain(known_values, return_only_outputs=True) known_values.update(outputs) return {k: known_values[k] for k in self.output_variables} [docs]class SimpleSequentialChain(Chain): """Simple chain where the outputs of one step feed directly into next.""" chains: List[Chain] strip_outputs: bool = False input_key: str = "input" #: :meta private: output_key: str = "output" #: :meta private: class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Expect input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Return output key. :meta private: """ return [self.output_key]
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/sequential.html
771a76cf4ca1-3
    @root_validator()
    def validate_chains(cls, values: Dict) -> Dict:
        """Validate that chains are all single input/output."""
        for chain in values["chains"]:
            if len(chain.input_keys) != 1:
                raise ValueError(
                    "Chains used in SimpleSequentialChain should all have one input, "
                    f"got {chain} with {len(chain.input_keys)} inputs."
                )
            if len(chain.output_keys) != 1:
                raise ValueError(
                    "Chains used in SimpleSequentialChain should all have one output, "
                    f"got {chain} with {len(chain.output_keys)} outputs."
                )
        return values

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        _input = inputs[self.input_key]
        color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])
        for i, chain in enumerate(self.chains):
            _input = chain.run(_input)
            if self.strip_outputs:
                _input = _input.strip()
            self.callback_manager.on_text(
                _input, color=color_mapping[str(i)], end="\n", verbose=self.verbose
            )
        return {self.output_key: _input}
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/sequential.html
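A hedged sketch of composing two chains with ``SimpleSequentialChain``; the prompts are invented for illustration, and an OpenAI API key is assumed to be configured:

.. code-block:: python

    from langchain import LLMChain, OpenAI, PromptTemplate
    from langchain.chains import SimpleSequentialChain

    llm = OpenAI(temperature=0.7)
    synopsis_chain = LLMChain(
        llm=llm,
        prompt=PromptTemplate(
            input_variables=["title"],
            template="Write a one-line synopsis for a play titled {title}.",
        ),
    )
    review_chain = LLMChain(
        llm=llm,
        prompt=PromptTemplate(
            input_variables=["synopsis"],
            template="Write a short review of a play with this synopsis: {synopsis}",
        ),
    )
    # Each chain has exactly one input and one output key, as the validator
    # above requires; the first output feeds straight into the second chain.
    overall_chain = SimpleSequentialChain(
        chains=[synopsis_chain, review_chain], verbose=True
    )
    overall_chain.run("Tragedy at sunset on the beach")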
47c0e0177ae7-0
Source code for langchain.chains.transform
"""Chain that runs an arbitrary python function."""
from typing import Callable, Dict, List

from langchain.chains.base import Chain

[docs]class TransformChain(Chain):
    """Chain that transforms the chain output with an arbitrary function.

    Example:
        .. code-block:: python

            from langchain import TransformChain

            transform_chain = TransformChain(
                input_variables=["text"],
                output_variables=["entities"],
                transform=func,
            )
    """

    input_variables: List[str]
    output_variables: List[str]
    transform: Callable[[Dict[str, str]], Dict[str, str]]

    @property
    def input_keys(self) -> List[str]:
        """Expect input keys.

        :meta private:
        """
        return self.input_variables

    @property
    def output_keys(self) -> List[str]:
        """Return output keys.

        :meta private:
        """
        return self.output_variables

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        return self.transform(inputs)
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/transform.html
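Because ``transform`` is just a callable from the input dict to the output dict, a complete runnable example is short; the function name ``shout`` is arbitrary:

.. code-block:: python

    from langchain import TransformChain

    def shout(inputs: dict) -> dict:
        # Receives the input dict and must return the declared output keys.
        return {"shout": inputs["text"].upper()}

    chain = TransformChain(
        input_variables=["text"], output_variables=["shout"], transform=shout
    )
    chain.run("hello world")  # -> "HELLO WORLD"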
b6cae4acac62-0
Source code for langchain.chains.loading """Functionality for loading chains.""" import json from pathlib import Path from typing import Any, Union import yaml from langchain.chains.api.base import APIChain from langchain.chains.base import Chain from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain from langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain from langchain.chains.combine_documents.refine import RefineDocumentsChain from langchain.chains.combine_documents.stuff import StuffDocumentsChain from langchain.chains.hyde.base import HypotheticalDocumentEmbedder from langchain.chains.llm import LLMChain from langchain.chains.llm_bash.base import LLMBashChain from langchain.chains.llm_checker.base import LLMCheckerChain from langchain.chains.llm_math.base import LLMMathChain from langchain.chains.llm_requests import LLMRequestsChain from langchain.chains.pal.base import PALChain from langchain.chains.qa_with_sources.base import QAWithSourcesChain from langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain from langchain.chains.retrieval_qa.base import VectorDBQA from langchain.chains.sql_database.base import SQLDatabaseChain
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-1
from langchain.chains.sql_database.base import SQLDatabaseChain from langchain.llms.loading import load_llm, load_llm_from_config from langchain.prompts.loading import load_prompt, load_prompt_from_config from langchain.utilities.loading import try_load_from_hub URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/chains/" def _load_llm_chain(config: dict, **kwargs: Any) -> LLMChain: """Load LLM chain from config dict.""" if "llm" in config: llm_config = config.pop("llm") llm = load_llm_from_config(llm_config) elif "llm_path" in config: llm = load_llm(config.pop("llm_path")) else: raise ValueError("One of `llm` or `llm_path` must be present.") if "prompt" in config: prompt_config = config.pop("prompt") prompt = load_prompt_from_config(prompt_config) elif "prompt_path" in config: prompt = load_prompt(config.pop("prompt_path")) else: raise ValueError("One of `prompt` or `prompt_path` must be present.") return LLMChain(llm=llm, prompt=prompt, **config)
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-2
return LLMChain(llm=llm, prompt=prompt, **config) def _load_hyde_chain(config: dict, **kwargs: Any) -> HypotheticalDocumentEmbedder: """Load hypothetical document embedder chain from config dict.""" if "llm_chain" in config: llm_chain_config = config.pop("llm_chain") llm_chain = load_chain_from_config(llm_chain_config) elif "llm_chain_path" in config: llm_chain = load_chain(config.pop("llm_chain_path")) else: raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.") if "embeddings" in kwargs: embeddings = kwargs.pop("embeddings") else: raise ValueError("`embeddings` must be present.") return HypotheticalDocumentEmbedder( llm_chain=llm_chain, base_embeddings=embeddings, **config ) def _load_stuff_documents_chain(config: dict, **kwargs: Any) -> StuffDocumentsChain: if "llm_chain" in config: llm_chain_config = config.pop("llm_chain") llm_chain = load_chain_from_config(llm_chain_config) elif "llm_chain_path" in config: llm_chain = load_chain(config.pop("llm_chain_path")) else:
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-3
        raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
    if not isinstance(llm_chain, LLMChain):
        raise ValueError(f"Expected LLMChain, got {llm_chain}")
    if "document_prompt" in config:
        prompt_config = config.pop("document_prompt")
        document_prompt = load_prompt_from_config(prompt_config)
    elif "document_prompt_path" in config:
        document_prompt = load_prompt(config.pop("document_prompt_path"))
    else:
        raise ValueError(
            "One of `document_prompt` or `document_prompt_path` must be present."
        )
    return StuffDocumentsChain(
        llm_chain=llm_chain, document_prompt=document_prompt, **config
    )

def _load_map_reduce_documents_chain(
    config: dict, **kwargs: Any
) -> MapReduceDocumentsChain:
    if "llm_chain" in config:
        llm_chain_config = config.pop("llm_chain")
        llm_chain = load_chain_from_config(llm_chain_config)
    elif "llm_chain_path" in config:
        llm_chain = load_chain(config.pop("llm_chain_path"))
    else:
        raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-4
    if not isinstance(llm_chain, LLMChain):
        raise ValueError(f"Expected LLMChain, got {llm_chain}")
    if "combine_document_chain" in config:
        combine_document_chain_config = config.pop("combine_document_chain")
        combine_document_chain = load_chain_from_config(combine_document_chain_config)
    elif "combine_document_chain_path" in config:
        combine_document_chain = load_chain(config.pop("combine_document_chain_path"))
    else:
        raise ValueError(
            "One of `combine_document_chain` or "
            "`combine_document_chain_path` must be present."
        )
    if "collapse_document_chain" in config:
        collapse_document_chain_config = config.pop("collapse_document_chain")
        if collapse_document_chain_config is None:
            collapse_document_chain = None
        else:
            collapse_document_chain = load_chain_from_config(
                collapse_document_chain_config
            )
    elif "collapse_document_chain_path" in config:
        collapse_document_chain = load_chain(config.pop("collapse_document_chain_path"))
    else:
        # Collapsing is optional; default to None so the name is always bound.
        collapse_document_chain = None
    return MapReduceDocumentsChain(
        llm_chain=llm_chain,
        combine_document_chain=combine_document_chain,
        collapse_document_chain=collapse_document_chain,
        **config,
    )

def _load_llm_bash_chain(config: dict, **kwargs: Any) -> LLMBashChain:
    if "llm" in config:
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-5
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")
    if "prompt" in config:
        prompt_config = config.pop("prompt")
        # Reassign into config so the chain falls back to its default prompt
        # when neither key is supplied (otherwise `prompt` would be unbound).
        config["prompt"] = load_prompt_from_config(prompt_config)
    elif "prompt_path" in config:
        config["prompt"] = load_prompt(config.pop("prompt_path"))
    return LLMBashChain(llm=llm, **config)

def _load_llm_checker_chain(config: dict, **kwargs: Any) -> LLMCheckerChain:
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")
    if "create_draft_answer_prompt" in config:
        create_draft_answer_prompt_config = config.pop("create_draft_answer_prompt")
        create_draft_answer_prompt = load_prompt_from_config(
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-6
create_draft_answer_prompt_config ) elif "create_draft_answer_prompt_path" in config: create_draft_answer_prompt = load_prompt( config.pop("create_draft_answer_prompt_path") ) if "list_assertions_prompt" in config: list_assertions_prompt_config = config.pop("list_assertions_prompt") list_assertions_prompt = load_prompt_from_config(list_assertions_prompt_config) elif "list_assertions_prompt_path" in config: list_assertions_prompt = load_prompt(config.pop("list_assertions_prompt_path")) if "check_assertions_prompt" in config: check_assertions_prompt_config = config.pop("check_assertions_prompt") check_assertions_prompt = load_prompt_from_config( check_assertions_prompt_config ) elif "check_assertions_prompt_path" in config: check_assertions_prompt = load_prompt( config.pop("check_assertions_prompt_path") ) if "revised_answer_prompt" in config: revised_answer_prompt_config = config.pop("revised_answer_prompt")
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-7
revised_answer_prompt = load_prompt_from_config(revised_answer_prompt_config) elif "revised_answer_prompt_path" in config: revised_answer_prompt = load_prompt(config.pop("revised_answer_prompt_path")) return LLMCheckerChain( llm=llm, create_draft_answer_prompt=create_draft_answer_prompt, list_assertions_prompt=list_assertions_prompt, check_assertions_prompt=check_assertions_prompt, revised_answer_prompt=revised_answer_prompt, **config, ) def _load_llm_math_chain(config: dict, **kwargs: Any) -> LLMMathChain: if "llm" in config: llm_config = config.pop("llm") llm = load_llm_from_config(llm_config) elif "llm_path" in config: llm = load_llm(config.pop("llm_path")) else: raise ValueError("One of `llm` or `llm_path` must be present.") if "prompt" in config: prompt_config = config.pop("prompt") prompt = load_prompt_from_config(prompt_config) elif "prompt_path" in config: prompt = load_prompt(config.pop("prompt_path"))
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-8
    else:
        # Without this branch `prompt` would be unbound when neither key is
        # supplied; fall back to the chain's default prompt.
        prompt = None
    if prompt is not None:
        config["prompt"] = prompt
    return LLMMathChain(llm=llm, **config)

def _load_map_rerank_documents_chain(
    config: dict, **kwargs: Any
) -> MapRerankDocumentsChain:
    if "llm_chain" in config:
        llm_chain_config = config.pop("llm_chain")
        llm_chain = load_chain_from_config(llm_chain_config)
    elif "llm_chain_path" in config:
        llm_chain = load_chain(config.pop("llm_chain_path"))
    else:
        raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
    return MapRerankDocumentsChain(llm_chain=llm_chain, **config)

def _load_pal_chain(config: dict, **kwargs: Any) -> PALChain:
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")
    if "prompt" in config:
        prompt_config = config.pop("prompt")
        prompt = load_prompt_from_config(prompt_config)
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-9
elif "prompt_path" in config: prompt = load_prompt(config.pop("prompt_path")) else: raise ValueError("One of `prompt` or `prompt_path` must be present.") return PALChain(llm=llm, prompt=prompt, **config) def _load_refine_documents_chain(config: dict, **kwargs: Any) -> RefineDocumentsChain: if "initial_llm_chain" in config: initial_llm_chain_config = config.pop("initial_llm_chain") initial_llm_chain = load_chain_from_config(initial_llm_chain_config) elif "initial_llm_chain_path" in config: initial_llm_chain = load_chain(config.pop("initial_llm_chain_path")) else: raise ValueError( "One of `initial_llm_chain` or `initial_llm_chain_config` must be present." ) if "refine_llm_chain" in config: refine_llm_chain_config = config.pop("refine_llm_chain") refine_llm_chain = load_chain_from_config(refine_llm_chain_config) elif "refine_llm_chain_path" in config: refine_llm_chain = load_chain(config.pop("refine_llm_chain_path")) else: raise ValueError(
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-10
    else:
        raise ValueError(
            "One of `refine_llm_chain` or `refine_llm_chain_path` must be present."
        )
    if "document_prompt" in config:
        prompt_config = config.pop("document_prompt")
        # Leaving document_prompt unset falls back to the chain's default.
        config["document_prompt"] = load_prompt_from_config(prompt_config)
    elif "document_prompt_path" in config:
        config["document_prompt"] = load_prompt(config.pop("document_prompt_path"))
    return RefineDocumentsChain(
        initial_llm_chain=initial_llm_chain,
        refine_llm_chain=refine_llm_chain,
        **config,
    )

def _load_qa_with_sources_chain(config: dict, **kwargs: Any) -> QAWithSourcesChain:
    if "combine_documents_chain" in config:
        combine_documents_chain_config = config.pop("combine_documents_chain")
        combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
    elif "combine_documents_chain_path" in config:
        combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
    else:
        raise ValueError(
            "One of `combine_documents_chain` or "
            "`combine_documents_chain_path` must be present."
        )
    return QAWithSourcesChain(combine_documents_chain=combine_documents_chain, **config)
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-11
def _load_sql_database_chain(config: dict, **kwargs: Any) -> SQLDatabaseChain:
    if "database" in kwargs:
        database = kwargs.pop("database")
    else:
        raise ValueError("`database` must be present.")
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")
    if "prompt" in config:
        prompt_config = config.pop("prompt")
        # Reassign into config so the chain falls back to its default prompt
        # when no prompt is supplied (otherwise `prompt` would be unbound).
        config["prompt"] = load_prompt_from_config(prompt_config)
    return SQLDatabaseChain(database=database, llm=llm, **config)

def _load_vector_db_qa_with_sources_chain(
    config: dict, **kwargs: Any
) -> VectorDBQAWithSourcesChain:
    if "vectorstore" in kwargs:
        vectorstore = kwargs.pop("vectorstore")
    else:
        raise ValueError("`vectorstore` must be present.")
    if "combine_documents_chain" in config:
        combine_documents_chain_config = config.pop("combine_documents_chain")
        combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-12
elif "combine_documents_chain_path" in config: combine_documents_chain = load_chain(config.pop("combine_documents_chain_path")) else: raise ValueError( "One of `combine_documents_chain` or " "`combine_documents_chain_path` must be present." ) return VectorDBQAWithSourcesChain( combine_documents_chain=combine_documents_chain, vectorstore=vectorstore, **config, ) def _load_vector_db_qa(config: dict, **kwargs: Any) -> VectorDBQA: if "vectorstore" in kwargs: vectorstore = kwargs.pop("vectorstore") else: raise ValueError("`vectorstore` must be present.") if "combine_documents_chain" in config: combine_documents_chain_config = config.pop("combine_documents_chain") combine_documents_chain = load_chain_from_config(combine_documents_chain_config) elif "combine_documents_chain_path" in config: combine_documents_chain = load_chain(config.pop("combine_documents_chain_path")) else: raise ValueError( "One of `combine_documents_chain` or " "`combine_documents_chain_path` must be present." ) return VectorDBQA( combine_documents_chain=combine_documents_chain, vectorstore=vectorstore, **config, ) def _load_api_chain(config: dict, **kwargs: Any) -> APIChain:
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-13
if "api_request_chain" in config: api_request_chain_config = config.pop("api_request_chain") api_request_chain = load_chain_from_config(api_request_chain_config) elif "api_request_chain_path" in config: api_request_chain = load_chain(config.pop("api_request_chain_path")) else: raise ValueError( "One of `api_request_chain` or `api_request_chain_path` must be present." ) if "api_answer_chain" in config: api_answer_chain_config = config.pop("api_answer_chain") api_answer_chain = load_chain_from_config(api_answer_chain_config) elif "api_answer_chain_path" in config: api_answer_chain = load_chain(config.pop("api_answer_chain_path")) else: raise ValueError( "One of `api_answer_chain` or `api_answer_chain_path` must be present." ) if "requests_wrapper" in kwargs: requests_wrapper = kwargs.pop("requests_wrapper") else: raise ValueError("`requests_wrapper` must be present.") return APIChain( api_request_chain=api_request_chain, api_answer_chain=api_answer_chain, requests_wrapper=requests_wrapper, **config, )
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-14
requests_wrapper=requests_wrapper, **config, ) def _load_llm_requests_chain(config: dict, **kwargs: Any) -> LLMRequestsChain: if "llm_chain" in config: llm_chain_config = config.pop("llm_chain") llm_chain = load_chain_from_config(llm_chain_config) elif "llm_chain_path" in config: llm_chain = load_chain(config.pop("llm_chain_path")) else: raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.") if "requests_wrapper" in kwargs: requests_wrapper = kwargs.pop("requests_wrapper") return LLMRequestsChain( llm_chain=llm_chain, requests_wrapper=requests_wrapper, **config ) else: return LLMRequestsChain(llm_chain=llm_chain, **config) type_to_loader_dict = { "api_chain": _load_api_chain, "hyde_chain": _load_hyde_chain, "llm_chain": _load_llm_chain, "llm_bash_chain": _load_llm_bash_chain, "llm_checker_chain": _load_llm_checker_chain, "llm_math_chain": _load_llm_math_chain,
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-15
"llm_math_chain": _load_llm_math_chain, "llm_requests_chain": _load_llm_requests_chain, "pal_chain": _load_pal_chain, "qa_with_sources_chain": _load_qa_with_sources_chain, "stuff_documents_chain": _load_stuff_documents_chain, "map_reduce_documents_chain": _load_map_reduce_documents_chain, "map_rerank_documents_chain": _load_map_rerank_documents_chain, "refine_documents_chain": _load_refine_documents_chain, "sql_database_chain": _load_sql_database_chain, "vector_db_qa_with_sources_chain": _load_vector_db_qa_with_sources_chain, "vector_db_qa": _load_vector_db_qa, } def load_chain_from_config(config: dict, **kwargs: Any) -> Chain: """Load chain from Config Dict.""" if "_type" not in config: raise ValueError("Must specify a chain Type in config") config_type = config.pop("_type") if config_type not in type_to_loader_dict: raise ValueError(f"Loading {config_type} chain not supported") chain_loader = type_to_loader_dict[config_type] return chain_loader(config, **kwargs)
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-16
return chain_loader(config, **kwargs) [docs]def load_chain(path: Union[str, Path], **kwargs: Any) -> Chain: """Unified method for loading a chain from LangChainHub or local fs.""" if hub_result := try_load_from_hub( path, _load_chain_from_file, "chains", {"json", "yaml"}, **kwargs ): return hub_result else: return _load_chain_from_file(path, **kwargs) def _load_chain_from_file(file: Union[str, Path], **kwargs: Any) -> Chain: """Load chain from file.""" # Convert file to Path object. if isinstance(file, str): file_path = Path(file) else: file_path = file # Load from either json or yaml. if file_path.suffix == ".json": with open(file_path) as f: config = json.load(f) elif file_path.suffix == ".yaml": with open(file_path, "r") as f: config = yaml.safe_load(f) else: raise ValueError("File type must be json or yaml") # Override default 'verbose' and 'memory' for the chain if "verbose" in kwargs: config["verbose"] = kwargs.pop("verbose") if "memory" in kwargs: config["memory"] = kwargs.pop("memory")
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
b6cae4acac62-17
config["memory"] = kwargs.pop("memory") # Load the chain from the config now. return load_chain_from_config(config, **kwargs) By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 26, 2023.
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html
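A sketch of loading a chain through this module; the file name and config contents are illustrative only:

.. code-block:: python

    from langchain.chains import load_chain

    # chain.json is a hypothetical saved config; its top-level "_type" key
    # selects the loader from type_to_loader_dict above, e.g.
    # {"_type": "llm_chain", "llm": {...}, "prompt": {...}}
    chain = load_chain("chain.json")

    # Loaders that need external objects receive them as kwargs, e.g.
    # load_chain("sql_chain.json", database=db)
    # load_chain("vector_db_qa.json", vectorstore=store)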
972b39109367-0
Source code for langchain.chains.llm_requests """Chain that hits a URL and then uses an LLM to parse results.""" from __future__ import annotations from typing import Dict, List from pydantic import Extra, Field, root_validator from langchain.chains import LLMChain from langchain.chains.base import Chain from langchain.requests import TextRequestsWrapper DEFAULT_HEADERS = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36" # noqa: E501 } [docs]class LLMRequestsChain(Chain): """Chain that hits a URL and then uses an LLM to parse results.""" llm_chain: LLMChain requests_wrapper: TextRequestsWrapper = Field( default_factory=TextRequestsWrapper, exclude=True ) text_length: int = 8000 requests_key: str = "requests_result" #: :meta private: input_key: str = "url" #: :meta private: output_key: str = "output" #: :meta private: class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Will be whatever keys the prompt expects. :meta private: """
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm_requests.html
972b39109367-1
"""Will be whatever keys the prompt expects. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Will always return text key. :meta private: """ return [self.output_key] @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that api key and python package exists in environment.""" try: from bs4 import BeautifulSoup # noqa: F401 except ImportError: raise ValueError( "Could not import bs4 python package. " "Please install it with `pip install bs4`." ) return values def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: from bs4 import BeautifulSoup # Other keys are assumed to be needed for LLM prediction other_keys = {k: v for k, v in inputs.items() if k != self.input_key} url = inputs[self.input_key] res = self.requests_wrapper.get(url) # extract the text from the html soup = BeautifulSoup(res, "html.parser") other_keys[self.requests_key] = soup.get_text()[: self.text_length] result = self.llm_chain.predict(**other_keys) return {self.output_key: result} @property def _chain_type(self) -> str: return "llm_requests_chain"
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm_requests.html
972b39109367-2
return "llm_requests_chain" By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 26, 2023.
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm_requests.html
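A usage sketch; the prompt must accept the ``requests_result`` key, which the chain fills with the fetched page text. The URL and template here are illustrative:

.. code-block:: python

    from langchain import LLMChain, OpenAI, PromptTemplate
    from langchain.chains import LLMRequestsChain

    template = (
        "Between >>> and <<< is the raw text of a web page.\n"
        "Extract the page title, or answer NOT_FOUND.\n"
        ">>> {requests_result} <<<"
    )
    prompt = PromptTemplate(input_variables=["requests_result"], template=template)

    chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(), prompt=prompt))
    # "url" is the chain's input key; the fetched, HTML-stripped page text is
    # injected under the "requests_result" key before the LLM runs.
    chain({"url": "https://example.com"})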
a5b48539d22f-0
Source code for langchain.chains.moderation """Pass input through a moderation endpoint.""" from typing import Any, Dict, List, Optional from pydantic import root_validator from langchain.chains.base import Chain from langchain.utils import get_from_dict_or_env [docs]class OpenAIModerationChain(Chain): """Pass input through a moderation endpoint. To use, you should have the ``openai`` python package installed, and the environment variable ``OPENAI_API_KEY`` set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example: .. code-block:: python from langchain.chains import OpenAIModerationChain moderation = OpenAIModerationChain() """ client: Any #: :meta private: model_name: Optional[str] = None """Moderation model name to use.""" error: bool = False """Whether or not to error if bad content was found.""" input_key: str = "input" #: :meta private: output_key: str = "output" #: :meta private: openai_api_key: Optional[str] = None openai_organization: Optional[str] = None @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that api key and python package exists in environment.""" openai_api_key = get_from_dict_or_env(
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/moderation.html
a5b48539d22f-1
openai_api_key = get_from_dict_or_env( values, "openai_api_key", "OPENAI_API_KEY" ) openai_organization = get_from_dict_or_env( values, "openai_organization", "OPENAI_ORGANIZATION", default="", ) try: import openai openai.api_key = openai_api_key if openai_organization: openai.organization = openai_organization values["client"] = openai.Moderation except ImportError: raise ValueError( "Could not import openai python package. " "Please install it with `pip install openai`." ) return values @property def input_keys(self) -> List[str]: """Expect input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Return output key. :meta private: """ return [self.output_key] def _moderate(self, text: str, results: dict) -> str: if results["flagged"]: error_str = "Text was found that violates OpenAI's content policy." if self.error: raise ValueError(error_str) else: return error_str return text def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: text = inputs[self.input_key] results = self.client.create(text) output = self._moderate(text, results["results"][0])
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/moderation.html
a5b48539d22f-2
        return {self.output_key: output}
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/moderation.html
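A short sketch, assuming ``OPENAI_API_KEY`` is set and the ``openai`` package is installed:

.. code-block:: python

    from langchain.chains import OpenAIModerationChain

    moderation = OpenAIModerationChain()
    # Unflagged text passes through unchanged; flagged text is replaced with
    # the policy-violation message.
    moderation.run("This is fine.")

    # With error=True, flagged text raises a ValueError instead.
    strict_moderation = OpenAIModerationChain(error=True)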
a37319839ef1-0
Source code for langchain.chains.mapreduce """Map-reduce chain. Splits up a document, sends the smaller parts to the LLM with one prompt, then combines the results with another one. """ from __future__ import annotations from typing import Dict, List from pydantic import Extra from langchain.chains.base import Chain from langchain.chains.combine_documents.base import BaseCombineDocumentsChain from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain from langchain.chains.combine_documents.stuff import StuffDocumentsChain from langchain.chains.llm import LLMChain from langchain.docstore.document import Document from langchain.llms.base import BaseLLM from langchain.prompts.base import BasePromptTemplate from langchain.text_splitter import TextSplitter [docs]class MapReduceChain(Chain): """Map-reduce chain.""" combine_documents_chain: BaseCombineDocumentsChain """Chain to use to combine documents.""" text_splitter: TextSplitter """Text splitter to use.""" input_key: str = "input_text" #: :meta private: output_key: str = "output_text" #: :meta private: [docs] @classmethod def from_params( cls, llm: BaseLLM, prompt: BasePromptTemplate, text_splitter: TextSplitter ) -> MapReduceChain:
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html
a37319839ef1-1
    ) -> MapReduceChain:
        """Construct a map-reduce chain that uses the chain for map and reduce."""
        llm_chain = LLMChain(llm=llm, prompt=prompt)
        reduce_chain = StuffDocumentsChain(llm_chain=llm_chain)
        combine_documents_chain = MapReduceDocumentsChain(
            llm_chain=llm_chain, combine_document_chain=reduce_chain
        )
        return cls(
            combine_documents_chain=combine_documents_chain, text_splitter=text_splitter
        )

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Expect input key.

        :meta private:
        """
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        """Return output key.

        :meta private:
        """
        return [self.output_key]

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        # Split the larger text into smaller chunks.
        texts = self.text_splitter.split_text(inputs[self.input_key])
        docs = [Document(page_content=text) for text in texts]
        outputs = self.combine_documents_chain.run(input_documents=docs)
        return {self.output_key: outputs}
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html
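A hedged end-to-end sketch of ``MapReduceChain.from_params``; the prompt wording and splitter choice are arbitrary:

.. code-block:: python

    from langchain import OpenAI, PromptTemplate
    from langchain.chains.mapreduce import MapReduceChain
    from langchain.text_splitter import CharacterTextSplitter

    prompt = PromptTemplate(
        input_variables=["text"],
        template="Write a concise summary of the following:\n\n{text}",
    )
    chain = MapReduceChain.from_params(
        llm=OpenAI(), prompt=prompt, text_splitter=CharacterTextSplitter()
    )
    # A placeholder document long enough to be split into several chunks.
    long_text = "LangChain is a framework for building LLM applications. " * 200
    chain.run(long_text)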
ac77c76c8a18-0
Source code for langchain.chains.llm_summarization_checker.base
"""Chain for summarization with self-verification."""
from pathlib import Path
from typing import Dict, List

from pydantic import Extra

from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.sequential import SequentialChain
from langchain.llms.base import BaseLLM
from langchain.prompts.prompt import PromptTemplate

PROMPTS_DIR = Path(__file__).parent / "prompts"

CREATE_ASSERTIONS_PROMPT = PromptTemplate.from_file(
    PROMPTS_DIR / "create_facts.txt", ["summary"]
)
CHECK_ASSERTIONS_PROMPT = PromptTemplate.from_file(
    PROMPTS_DIR / "check_facts.txt", ["assertions"]
)
REVISED_SUMMARY_PROMPT = PromptTemplate.from_file(
    PROMPTS_DIR / "revise_summary.txt", ["checked_assertions", "summary"]
)
ARE_ALL_TRUE_PROMPT = PromptTemplate.from_file(
    PROMPTS_DIR / "are_all_true_prompt.txt", ["checked_assertions"]
)

[docs]class LLMSummarizationCheckerChain(Chain):
    """Chain for summarization with self-verification.

    Example:
        .. code-block:: python
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html
ac77c76c8a18-1
from langchain import OpenAI, LLMSummarizationCheckerChain llm = OpenAI(temperature=0.0) checker_chain = LLMSummarizationCheckerChain(llm=llm) """ llm: BaseLLM """LLM wrapper to use.""" create_assertions_prompt: PromptTemplate = CREATE_ASSERTIONS_PROMPT check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT revised_summary_prompt: PromptTemplate = REVISED_SUMMARY_PROMPT are_all_true_prompt: PromptTemplate = ARE_ALL_TRUE_PROMPT input_key: str = "query" #: :meta private: output_key: str = "result" #: :meta private: max_checks: int = 2 """Maximum number of times to check the assertions. Default to double-checking.""" class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Return the singular input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Return the singular output key. :meta private: """ return [self.output_key] def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html
ac77c76c8a18-2
all_true = False count = 0 output = None original_input = inputs[self.input_key] chain_input = original_input while not all_true and count < self.max_checks: chain = SequentialChain( chains=[ LLMChain( llm=self.llm, prompt=self.create_assertions_prompt, output_key="assertions", verbose=self.verbose, ), LLMChain( llm=self.llm, prompt=self.check_assertions_prompt, output_key="checked_assertions", verbose=self.verbose, ), LLMChain( llm=self.llm, prompt=self.revised_summary_prompt, output_key="revised_summary", verbose=self.verbose, ), LLMChain( llm=self.llm, output_key="all_true", prompt=self.are_all_true_prompt, verbose=self.verbose, ), ], input_variables=["summary"], output_variables=["all_true", "revised_summary"], verbose=self.verbose, ) output = chain({"summary": chain_input}) count += 1 if output["all_true"].strip() == "True": break if self.verbose: print(output["revised_summary"]) chain_input = output["revised_summary"] if not output: raise ValueError("No output from chain")
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html
ac77c76c8a18-3
        return {self.output_key: output["revised_summary"].strip()}

    @property
    def _chain_type(self) -> str:
        return "llm_summarization_checker_chain"
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html
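In use, the verify-and-revise loop above is driven by a single call; the sample summary is invented:

.. code-block:: python

    from langchain import OpenAI, LLMSummarizationCheckerChain

    llm = OpenAI(temperature=0)
    checker_chain = LLMSummarizationCheckerChain(llm=llm, max_checks=2, verbose=True)
    # Each pass extracts assertions from the summary, checks them, revises
    # the summary, and stops once the model reports all assertions true.
    checker_chain.run("The Greenland shark can live for roughly 400 years.")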
46bc1c5110e6-0
Source code for langchain.chains.hyde.base """Hypothetical Document Embeddings. https://arxiv.org/abs/2212.10496 """ from __future__ import annotations from typing import Dict, List import numpy as np from pydantic import Extra from langchain.chains.base import Chain from langchain.chains.hyde.prompts import PROMPT_MAP from langchain.chains.llm import LLMChain from langchain.embeddings.base import Embeddings from langchain.llms.base import BaseLLM [docs]class HypotheticalDocumentEmbedder(Chain, Embeddings): """Generate hypothetical document for query, and then embed that. Based on https://arxiv.org/abs/2212.10496 """ base_embeddings: Embeddings llm_chain: LLMChain class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Input keys for Hyde's LLM chain.""" return self.llm_chain.input_keys @property def output_keys(self) -> List[str]: """Output keys for Hyde's LLM chain.""" return self.llm_chain.output_keys [docs] def embed_documents(self, texts: List[str]) -> List[List[float]]: """Call the base embeddings."""
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/hyde/base.html
46bc1c5110e6-1
"""Call the base embeddings.""" return self.base_embeddings.embed_documents(texts) [docs] def combine_embeddings(self, embeddings: List[List[float]]) -> List[float]: """Combine embeddings into final embeddings.""" return list(np.array(embeddings).mean(axis=0)) [docs] def embed_query(self, text: str) -> List[float]: """Generate a hypothetical document and embedded it.""" var_name = self.llm_chain.input_keys[0] result = self.llm_chain.generate([{var_name: text}]) documents = [generation.text for generation in result.generations[0]] embeddings = self.embed_documents(documents) return self.combine_embeddings(embeddings) def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: """Call the internal llm chain.""" return self.llm_chain._call(inputs) [docs] @classmethod def from_llm( cls, llm: BaseLLM, base_embeddings: Embeddings, prompt_key: str ) -> HypotheticalDocumentEmbedder: """Load and use LLMChain for a specific prompt key.""" prompt = PROMPT_MAP[prompt_key] llm_chain = LLMChain(llm=llm, prompt=prompt)
/content/https://python.langchain.com/en/latest/_modules/langchain/chains/hyde/base.html
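A sketch of query embedding with HyDE; treat the exact ``prompt_key`` value as an assumption here, since the accepted keys come from ``PROMPT_MAP``:

.. code-block:: python

    from langchain.chains import HypotheticalDocumentEmbedder
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.llms import OpenAI

    # "web_search" is assumed to be one of the keys defined in PROMPT_MAP.
    embedder = HypotheticalDocumentEmbedder.from_llm(
        llm=OpenAI(), base_embeddings=OpenAIEmbeddings(), prompt_key="web_search"
    )
    # Generates hypothetical answer documents, embeds them, and averages the
    # vectors into a single query embedding.
    embedder.embed_query("Where is the Eiffel Tower located?")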