"I found several skirts that may interest you. Please take a look at the following products: Avenue Plus Size Denim Stretch Skirt, LoveShackFancy Ruffled Mini Skirt - Antique White, Nike Dri-Fit Club Golf Skirt - Active Pink, Skims Soft Lounge Ruched Long Skirt, French Toast Girl's Front Pleated Skirt with Tabs, Alexia Admor Women's Harmonie Mini Skirt Pink Pink, Vero Moda Long Skirt, Nike Court Dri-FIT Victory Flouncy Tennis Skirt Women - White/Black, Haoyuan Mini Pleated Skirts W, and Zimmermann Lyre Midi Skirt.", 'Based on the API response, you may want to consider the Skytech Archangel Gaming Computer PC Desktop, the CyberPowerPC Gamer Master Gaming Desktop, or the ASUS ROG Strix G10DK-RS756, as they all offer powerful processors and plenty of RAM.', 'Based on the API response, the best budget cameras are the DJI Mini 2 Dog Camera ($448.50), Insta360 Sphere with Landing Pad ($429.99), DJI FPV Gimbal Camera ($121.06), Parrot Camera & Body ($36.19), and DJI FPV Air Unit ($179.00).'] Evaluate the requests chain# The API Chain has two main components: Translate the user query to an API request (request synthesizer) Translate the API response to a natural language response Here, we construct an evaluation chain to grade the request synthesizer against selected human queries import json truth_queries = [json.dumps(data["expected_query"]) for data in dataset] # Collect the API queries generated by the chain
# Collect the API queries generated by the chain
predicted_queries = [output["intermediate_steps"]["request_args"] for output in chain_outputs]

from langchain.prompts import PromptTemplate

template = """You are trying to answer the following question by querying an API:

> Question: {question}

The query you know you should be executing against the API is:

> Query: {truth_query}

Is the following predicted query semantically the same (eg likely to produce the same answer)?

> Predicted Query: {predict_query}

Please give the Predicted Query a grade of either an A, B, C, D, or F, along with an explanation of why. End the evaluation with 'Final Grade: <the letter>'

> Explanation: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
eval_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)

request_eval_results = []
for question, predict_query, truth_query in list(zip(questions, predicted_queries, truth_queries)):
    eval_output = eval_chain.run(
        question=question,
        truth_query=truth_query,
        predict_query=predict_query,
    )
    request_eval_results.append(eval_output)
request_eval_results [' The original query is asking for all iPhone models, so the "q" parameter is correct. The "max_price" parameter is also correct, as it is set to null, meaning that no maximum price is set. The predicted query adds two additional parameters, "size" and "min_price". The "size" parameter is not necessary, as it is not relevant to the question being asked. The "min_price" parameter is also not necessary, as it is not relevant to the question being asked and it is set to 0, which is the default value. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D', ' The original query is asking for laptops with a maximum price of 300. The predicted query is asking for laptops with a minimum price of 0 and a maximum price of 500. This means that the predicted query is likely to return more results than the original query, as it is asking for a wider range of prices. Therefore, the predicted query is not semantically the same as the original query, and it is not likely to produce the same answer. Final Grade: F', " The first two parameters are the same, so that's good. The third parameter is different, but it's not necessary for the query, so that's not a problem. The fourth parameter is the problem. The original query specifies a maximum price of 500, while the predicted query specifies a maximum price of null. This means that the predicted query will not limit the results to the cheapest gaming PCs, so it is not semantically the same as the original query. Final Grade: F",
' The original query is asking for tablets under $400, so the first two parameters are correct. The predicted query also includes the parameters "size" and "min_price", which are not necessary for the original query. The "size" parameter is not relevant to the question, and the "min_price" parameter is redundant since the original query already specifies a maximum price. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D', ' The original query is asking for headphones with no maximum price, so the predicted query is not semantically the same because it has a maximum price of 500. The predicted query also has a size of 10, which is not specified in the original query. Therefore, the predicted query is not semantically the same as the original query. Final Grade: F', " The original query is asking for the top rated laptops, so the 'size' parameter should be set to 10 to get the top 10 results. The 'min_price' parameter should be set to 0 to get results from all price ranges. The 'max_price' parameter should be set to null to get results from all price ranges. The 'q' parameter should be set to 'laptop' to get results related to laptops. All of these parameters are present in the predicted query, so it is semantically the same as the original query. Final Grade: A",
' The original query is asking for shoes, so the predicted query is asking for the same thing. The original query does not specify a size, so the predicted query is not adding any additional information. The original query does not specify a price range, so the predicted query is adding additional information that is not necessary. Therefore, the predicted query is not semantically the same as the original query and is likely to produce different results. Final Grade: D', ' The original query is asking for a skirt, so the predicted query is asking for the same thing. The predicted query also adds additional parameters such as size and price range, which could help narrow down the results. However, the size parameter is not necessary for the query to be successful, and the price range is too narrow. Therefore, the predicted query is not as effective as the original query. Final Grade: C', ' The first part of the query is asking for a Desktop PC, which is the same as the original query. The second part of the query is asking for a size of 10, which is not relevant to the original query. The third part of the query is asking for a minimum price of 0, which is not relevant to the original query. The fourth part of the query is asking for a maximum price of null, which is not relevant to the original query. Therefore, the Predicted Query does not semantically match the original query and is not likely to produce the same answer. Final Grade: F',
' The original query is asking for cameras with a maximum price of 300. The predicted query is asking for cameras with a maximum price of 500. This means that the predicted query is likely to return more results than the original query, which may include cameras that are not within the budget range. Therefore, the predicted query is not semantically the same as the original query and does not answer the original question. Final Grade: F']

import re
from typing import List

# Parse the evaluation chain responses into a rubric
def parse_eval_results(results: List[str]) -> List[float]:
    rubric = {
        "A": 1.0,
        "B": 0.75,
        "C": 0.5,
        "D": 0.25,
        "F": 0
    }
    return [rubric[re.search(r'Final Grade: (\w+)', res).group(1)] for res in results]

parsed_results = parse_eval_results(request_eval_results)
# Collect the scores for a final evaluation table
scores['request_synthesizer'].extend(parsed_results)

Evaluate the Response Chain#

The second component translates the structured API response to a natural language response. Evaluate this against the user's original question.

from langchain.prompts import PromptTemplate

template = """You are trying to answer the following question by querying an API:

> Question: {question}

The API returned a response of:

> API result: {api_response}

Your response to the user: {answer}

Please evaluate the accuracy and utility of your response to the user's original question, conditioned on the information available. Give a letter grade of either an A, B, C, D, or F, along with an explanation of why. End the evaluation with 'Final Grade: <the letter>'

> Explanation: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
eval_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)

# Extract the API responses from the chain
api_responses = [output["intermediate_steps"]["response_text"] for output in chain_outputs]

# Run the grader chain
response_eval_results = []
for question, api_response, answer in list(zip(questions, api_responses, answers)):
    response_eval_results.append(eval_chain.run(question=question, api_response=api_response, answer=answer))
response_eval_results
[' The user asked a question about what iPhone models are available, and the API returned a response with 10 different models. The response provided by the user accurately listed all 10 models, so the accuracy of the response is A+. The utility of the response is also A+ since the user was able to get the exact information they were looking for. Final Grade: A+',
 " The API response provided a list of laptops with their prices and attributes. The user asked if there were any budget laptops, and the response provided a list of laptops that are all priced under $500. Therefore, the response was accurate and useful in answering the user's question. Final Grade: A",
 " The API response provided the name, price, and URL of the product, which is exactly what the user asked for. The response also provided additional information about the product's attributes, which is useful for the user to make an informed decision. Therefore, the response is accurate and useful. Final Grade: A",
 " The API response provided a list of tablets that are under $400. The response accurately answered the user's question. Additionally, the response provided useful information such as the product name, price, and attributes. Therefore, the response was accurate and useful. Final Grade: A",
" The API response provided a list of headphones with their respective prices and attributes. The user asked for the best headphones, so the response should include the best headphones based on the criteria provided. The response provided a list of headphones that are all from the same brand (Apple) and all have the same type of headphone (True Wireless, In-Ear). This does not provide the user with enough information to make an informed decision about which headphones are the best. Therefore, the response does not accurately answer the user's question. Final Grade: F", ' The API response provided a list of laptops with their attributes, which is exactly what the user asked for. The response provided a comprehensive list of the top rated laptops, which is what the user was looking for. The response was accurate and useful, providing the user with the information they needed. Final Grade: A', ' The API response provided a list of shoes from both Adidas and Nike, which is exactly what the user asked for. The response also included the product name, price, and attributes for each shoe, which is useful information for the user to make an informed decision. The response also included links to the products, which is helpful for the user to purchase the shoes. Therefore, the response was accurate and useful. Final Grade: A',
" The API response provided a list of skirts that could potentially meet the user's needs. The response also included the name, price, and attributes of each skirt. This is a great start, as it provides the user with a variety of options to choose from. However, the response does not provide any images of the skirts, which would have been helpful for the user to make a decision. Additionally, the response does not provide any information about the availability of the skirts, which could be important for the user. \n\nFinal Grade: B", ' The user asked for a professional desktop PC with no budget constraints. The API response provided a list of products that fit the criteria, including the Skytech Archangel Gaming Computer PC Desktop, the CyberPowerPC Gamer Master Gaming Desktop, and the ASUS ROG Strix G10DK-RS756. The response accurately suggested these three products as they all offer powerful processors and plenty of RAM. Therefore, the response is accurate and useful. Final Grade: A', " The API response provided a list of cameras with their prices, which is exactly what the user asked for. The response also included additional information such as features and memory cards, which is not necessary for the user's question but could be useful for further research. The response was accurate and provided the user with the information they needed. Final Grade: A"] # Reusing the rubric from above, parse the evaluation chain responses parsed_response_results = parse_eval_results(request_eval_results) # Collect the scores for a final evaluation table scores['result_synthesizer'].extend(parsed_response_results) # Print out Score statistics for the evaluation session
# Print out Score statistics for the evaluation session
header = "{:<20}\t{:<10}\t{:<10}\t{:<10}".format("Metric", "Min", "Mean", "Max")
print(header)
for metric, metric_scores in scores.items():
    mean_scores = sum(metric_scores) / len(metric_scores) if len(metric_scores) > 0 else float('nan')
    row = "{:<20}\t{:<10.2f}\t{:<10.2f}\t{:<10.2f}".format(metric, min(metric_scores), mean_scores, max(metric_scores))
    print(row)

Metric              	Min       	Mean      	Max
completed           	1.00      	1.00      	1.00
request_synthesizer 	0.00      	0.23      	1.00
result_synthesizer  	0.00      	0.88      	1.00

# Re-show the examples for which the chain failed to complete
failed_examples
[]

Generating Test Datasets#

To evaluate a chain against your own endpoint, you'll want to generate a test dataset that conforms to the API. This section provides an overview of how to bootstrap the process.

First, we'll parse the OpenAPI Spec. For this example, we'll use Speak's OpenAPI specification.

# Load and parse the OpenAPI Spec
spec = OpenAPISpec.from_url("https://api.speak.com/openapi.yaml")

Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
# List the paths in the OpenAPI Spec
paths = sorted(spec.paths.keys())
paths

['/v1/public/openai/explain-phrase',
 '/v1/public/openai/explain-task',
 '/v1/public/openai/translate']

# See which HTTP Methods are available for a given path
methods = spec.get_methods_for_path('/v1/public/openai/explain-task')
methods

['post']

# Load a single endpoint operation
operation = APIOperation.from_openapi_spec(spec, '/v1/public/openai/explain-task', 'post')

# The operation can be serialized as typescript
print(operation.to_typescript())

type explainTask = (_: {
/* Description of the task that the user wants to accomplish or do. For example, "tell the waiter they messed up my order" or "compliment someone on their shirt" */
task_description?: string,
/* The foreign language that the user is learning and asking about. The value can be inferred from question - for example, if the user asks "how do i ask a girl out in mexico city", the value should be "Spanish" because of Mexico City. Always use the full name of the language (e.g. Spanish, French). */
learning_language?: string,
/* The user's native language. Infer this value from the language the user asked their question in. Always use the full name of the language (e.g. Spanish, French). */
native_language?: string,
/* A description of any additional context in the user's question that could affect the explanation - e.g. setting, scenario, situation, tone, speaking style and formality, usage notes, or any other qualifiers. */
additional_context?: string,
/* Full text of the user's question. */
full_query?: string,
}) => any;

# Compress the service definition to avoid leaking too much input structure to the sample data
template = """In 20 words or less, what does this service accomplish?
{spec}

Function: It's designed to """
prompt = PromptTemplate.from_template(template)
generation_chain = LLMChain(llm=llm, prompt=prompt)
purpose = generation_chain.run(spec=operation.to_typescript())

template = """Write a list of {num_to_generate} unique messages users might send to a service designed to{purpose} They must each be completely unique.

1."""

def parse_list(text: str) -> List[str]:
    # Split on newlines, then strip each line's leading "N. " numeric bullet,
    # surrounding whitespace, and any wrapping quotes
    return [re.sub(r'^\d+\. ', '', q).strip().strip('"') for q in text.split('\n')]
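As a quick sanity check, here is a sketch of parse_list on a hypothetical two-item completion:

# Sketch: parse_list turns a numbered-list completion into clean strings
sample = '1. "How do I order coffee in French?"\n2. "What does gracias mean?"'
parse_list(sample)
# -> ['How do I order coffee in French?', 'What does gracias mean?']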
num_to_generate = 10 # How many examples to use for this test set.
prompt = PromptTemplate.from_template(template)
generation_chain = LLMChain(llm=llm, prompt=prompt)
text = generation_chain.run(purpose=purpose, num_to_generate=num_to_generate)

# Strip preceding numeric bullets
queries = parse_list(text)
queries

["Can you explain how to say 'hello' in Spanish?",
 "I need help understanding the French word for 'goodbye'.",
 "Can you tell me how to say 'thank you' in German?",
 "I'm trying to learn the Italian word for 'please'.",
 "Can you help me with the pronunciation of 'yes' in Portuguese?",
 "I'm looking for the Dutch word for 'no'.",
 "Can you explain the meaning of 'hello' in Japanese?",
 "I need help understanding the Russian word for 'thank you'.",
 "Can you tell me how to say 'goodbye' in Chinese?",
 "I'm trying to learn the Arabic word for 'please'."]

# Define the generation chain to get hypotheses
api_chain = OpenAPIEndpointChain.from_api_operation(
    operation,
    llm,
    requests=Requests(),
    verbose=verbose,
    return_intermediate_steps=True  # Return request and response text
)
predicted_outputs = [api_chain(query) for query in queries]
request_args = [output["intermediate_steps"]["request_args"] for output in predicted_outputs]
# Show the generated request request_args ['{"task_description": "say \'hello\'", "learning_language": "Spanish", "native_language": "English", "full_query": "Can you explain how to say \'hello\' in Spanish?"}', '{"task_description": "understanding the French word for \'goodbye\'", "learning_language": "French", "native_language": "English", "full_query": "I need help understanding the French word for \'goodbye\'."}', '{"task_description": "say \'thank you\'", "learning_language": "German", "native_language": "English", "full_query": "Can you tell me how to say \'thank you\' in German?"}', '{"task_description": "Learn the Italian word for \'please\'", "learning_language": "Italian", "native_language": "English", "full_query": "I\'m trying to learn the Italian word for \'please\'."}', '{"task_description": "Help with pronunciation of \'yes\' in Portuguese", "learning_language": "Portuguese", "native_language": "English", "full_query": "Can you help me with the pronunciation of \'yes\' in Portuguese?"}',
'{"task_description": "Find the Dutch word for \'no\'", "learning_language": "Dutch", "native_language": "English", "full_query": "I\'m looking for the Dutch word for \'no\'."}', '{"task_description": "Explain the meaning of \'hello\' in Japanese", "learning_language": "Japanese", "native_language": "English", "full_query": "Can you explain the meaning of \'hello\' in Japanese?"}', '{"task_description": "understanding the Russian word for \'thank you\'", "learning_language": "Russian", "native_language": "English", "full_query": "I need help understanding the Russian word for \'thank you\'."}', '{"task_description": "say goodbye", "learning_language": "Chinese", "native_language": "English", "full_query": "Can you tell me how to say \'goodbye\' in Chinese?"}', '{"task_description": "Learn the Arabic word for \'please\'", "learning_language": "Arabic", "native_language": "English", "full_query": "I\'m trying to learn the Arabic word for \'please\'."}'] ## AI Assisted Correction
## AI Assisted Correction

correction_template = """Correct the following API request based on the user's feedback. If the user indicates no changes are needed, output the original without making any changes.

REQUEST: {request}

User Feedback / requested changes: {user_feedback}

Finalized Request: """

prompt = PromptTemplate.from_template(correction_template)
correction_chain = LLMChain(llm=llm, prompt=prompt)

ground_truth = []
for query, request_arg in list(zip(queries, request_args)):
    feedback = input(f"Query: {query}\nRequest: {request_arg}\nRequested changes: ")
    if feedback == 'n' or feedback == 'none' or not feedback:
        ground_truth.append(request_arg)
        continue
    resolved = correction_chain.run(request=request_arg, user_feedback=feedback)
    ground_truth.append(resolved.strip())
    print("Updated request:", resolved)

Query: Can you explain how to say 'hello' in Spanish?
Request: {"task_description": "say 'hello'", "learning_language": "Spanish", "native_language": "English", "full_query": "Can you explain how to say 'hello' in Spanish?"}
Requested changes:
Query: I need help understanding the French word for 'goodbye'.
Request: {"task_description": "understanding the French word for 'goodbye'", "learning_language": "French", "native_language": "English", "full_query": "I need help understanding the French word for 'goodbye'."}
Requested changes:

Query: Can you tell me how to say 'thank you' in German?
Request: {"task_description": "say 'thank you'", "learning_language": "German", "native_language": "English", "full_query": "Can you tell me how to say 'thank you' in German?"}
Requested changes:

Query: I'm trying to learn the Italian word for 'please'.
Request: {"task_description": "Learn the Italian word for 'please'", "learning_language": "Italian", "native_language": "English", "full_query": "I'm trying to learn the Italian word for 'please'."}
Requested changes:

Query: Can you help me with the pronunciation of 'yes' in Portuguese?
Request: {"task_description": "Help with pronunciation of 'yes' in Portuguese", "learning_language": "Portuguese", "native_language": "English", "full_query": "Can you help me with the pronunciation of 'yes' in Portuguese?"}
Requested changes:
Query: I'm looking for the Dutch word for 'no'.
Request: {"task_description": "Find the Dutch word for 'no'", "learning_language": "Dutch", "native_language": "English", "full_query": "I'm looking for the Dutch word for 'no'."}
Requested changes:

Query: Can you explain the meaning of 'hello' in Japanese?
Request: {"task_description": "Explain the meaning of 'hello' in Japanese", "learning_language": "Japanese", "native_language": "English", "full_query": "Can you explain the meaning of 'hello' in Japanese?"}
Requested changes:

Query: I need help understanding the Russian word for 'thank you'.
Request: {"task_description": "understanding the Russian word for 'thank you'", "learning_language": "Russian", "native_language": "English", "full_query": "I need help understanding the Russian word for 'thank you'."}
Requested changes:

Query: Can you tell me how to say 'goodbye' in Chinese?
Request: {"task_description": "say goodbye", "learning_language": "Chinese", "native_language": "English", "full_query": "Can you tell me how to say 'goodbye' in Chinese?"}
Requested changes:
Query: I'm trying to learn the Arabic word for 'please'.
Request: {"task_description": "Learn the Arabic word for 'please'", "learning_language": "Arabic", "native_language": "English", "full_query": "I'm trying to learn the Arabic word for 'please'."}
Requested changes:

Now you can use the ground_truth as shown above in Evaluate the Requests Chain!

# Now you have a new ground truth set to use as shown above!
ground_truth

['{"task_description": "say \'hello\'", "learning_language": "Spanish", "native_language": "English", "full_query": "Can you explain how to say \'hello\' in Spanish?"}',
 '{"task_description": "understanding the French word for \'goodbye\'", "learning_language": "French", "native_language": "English", "full_query": "I need help understanding the French word for \'goodbye\'."}',
 '{"task_description": "say \'thank you\'", "learning_language": "German", "native_language": "English", "full_query": "Can you tell me how to say \'thank you\' in German?"}',
'{"task_description": "Learn the Italian word for \'please\'", "learning_language": "Italian", "native_language": "English", "full_query": "I\'m trying to learn the Italian word for \'please\'."}', '{"task_description": "Help with pronunciation of \'yes\' in Portuguese", "learning_language": "Portuguese", "native_language": "English", "full_query": "Can you help me with the pronunciation of \'yes\' in Portuguese?"}', '{"task_description": "Find the Dutch word for \'no\'", "learning_language": "Dutch", "native_language": "English", "full_query": "I\'m looking for the Dutch word for \'no\'."}', '{"task_description": "Explain the meaning of \'hello\' in Japanese", "learning_language": "Japanese", "native_language": "English", "full_query": "Can you explain the meaning of \'hello\' in Japanese?"}', '{"task_description": "understanding the Russian word for \'thank you\'", "learning_language": "Russian", "native_language": "English", "full_query": "I need help understanding the Russian word for \'thank you\'."}',
'{"task_description": "say goodbye", "learning_language": "Chinese", "native_language": "English", "full_query": "Can you tell me how to say \'goodbye\' in Chinese?"}', '{"task_description": "Learn the Arabic word for \'please\'", "learning_language": "Arabic", "native_language": "English", "full_query": "I\'m trying to learn the Arabic word for \'please\'."}'] previous LLM Math next Question Answering Benchmarking: Paul Graham Essay Contents Load the API Chain Optional: Generate Input Questions and Request Ground Truth Queries Run the API Chain Evaluate the requests chain Evaluate the Response Chain Generating Test Datasets By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 26, 2023.
Agent Benchmarking: Search + Calculator#

Here we go over how to benchmark performance of an agent on tasks where it has access to a calculator and a search tool.

It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.

# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"

Loading the data#

First, let's load the data.

from langchain.evaluation.loading import load_dataset
dataset = load_dataset("agent-search-calculator")

Setting up a chain#

Now we need to load an agent capable of answering these questions.

from langchain.llms import OpenAI
from langchain.chains import LLMMathChain
from langchain.agents import initialize_agent, Tool, load_tools
from langchain.agents import AgentType

tools = load_tools(['serpapi', 'llm-math'], llm=OpenAI(temperature=0))
agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

Make a prediction#

First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.
print(dataset[0]['question'])
agent.run(dataset[0]['question'])

Make many predictions#

Now we can make many predictions.

agent.run(dataset[4]['question'])

predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
    new_data = {"input": data["question"], "answer": data["answer"]}
    try:
        predictions.append(agent(new_data))
        predicted_dataset.append(new_data)
    except Exception as e:
        predictions.append({"output": str(e), **new_data})
        error_dataset.append(new_data)

Evaluate performance#

Now we can evaluate the predictions. The first thing we can do is look at them by eye.

predictions[0]

Next, we can use a language model to score them programmatically.

from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(dataset, predictions, question_key="question", prediction_key="output")

We can add the graded output to the predictions dict and then get a count of the grades.

for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']

from collections import Counter
Counter([pred['grade'] for pred in predictions])
We can also filter the datapoints to the incorrect examples and look at them.

incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect
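Two failure modes are worth separating here: runs that raised an exception (collected in error_dataset above) and runs that completed but were graded INCORRECT. A small sketch for reviewing both:

# Sketch: summarize and inspect both failure modes
print(f"{len(error_dataset)} runs raised an exception; {len(incorrect)} were graded INCORRECT")
for failed in error_dataset:
    print("errored on:", failed["input"])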
Question Answering Benchmarking: Paul Graham Essay#

Here we go over how to benchmark performance on a question answering task over a Paul Graham essay.

It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.

# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"

Loading the data#

First, let's load the data.

from langchain.evaluation.loading import load_dataset
dataset = load_dataset("question-answering-paul-graham")

Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-paul-graham-76e8f711e038d742/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)

Setting up a chain#

Now we need to create some pipelines for doing question answering. Step one in that is creating an index over the data in question.

from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/paul_graham_essay.txt")
from langchain.indexes import VectorstoreIndexCreator
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

Now we can create a question answering chain.

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")

Make a prediction#

First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.

chain(dataset[0])

{'question': 'What were the two main things the author worked on before college?',
 'answer': 'The two main things the author worked on before college were writing and programming.',
 'result': ' Writing and programming.'}

Make many predictions#

Now we can make many predictions.

predictions = chain.apply(dataset)

Evaluate performance#

Now we can evaluate the predictions. The first thing we can do is look at them by eye.

predictions[0]

{'question': 'What were the two main things the author worked on before college?',
 'answer': 'The two main things the author worked on before college were writing and programming.',
 'result': ' Writing and programming.'}
Next, we can use a language model to score them programmatically.

from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(dataset, predictions, question_key="question", prediction_key="result")

We can add the graded output to the predictions dict and then get a count of the grades.

for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']

from collections import Counter
Counter([pred['grade'] for pred in predictions])

Counter({' CORRECT': 12, ' INCORRECT': 10})

We can also filter the datapoints to the incorrect examples and look at them.

incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]

{'question': 'What did the author write their dissertation on?',
 'answer': 'The author wrote their dissertation on applications of continuations.',
 'result': ' The author does not mention what their dissertation was on, so it is not known.',
 'grade': ' INCORRECT'}
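As a quick worked example of reading that Counter: 12 of the 22 graded answers were marked correct, i.e. an accuracy of 12/22 ≈ 0.545. The same arithmetic in code:

counts = Counter([pred['grade'] for pred in predictions])
print(counts[' CORRECT'] / sum(counts.values()))  # 12 / 22 = 0.5454...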
LLM Math#

Evaluating chains that know how to do math.

# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"

from langchain.evaluation.loading import load_dataset
dataset = load_dataset("llm-math")

Downloading and preparing dataset json/LangChainDatasets--llm-math to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--llm-math-509b11d101165afa/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Dataset json downloaded and prepared to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--llm-math-509b11d101165afa/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.

Setting up a chain#

Now we need to create some pipelines for doing math.

from langchain.llms import OpenAI
from langchain.chains import LLMMathChain

llm = OpenAI()
chain = LLMMathChain(llm=llm)
predictions = chain.apply(dataset)
numeric_output = [float(p['answer'].strip().strip("Answer: ")) for p in predictions]
correct = [example['answer'] == numeric_output[i] for i, example in enumerate(dataset)]
sum(correct) / len(correct)

1.0

for i, example in enumerate(dataset):
    print("input: ", example["question"])
    print("expected output :", example["answer"])
    print("prediction: ", numeric_output[i])

input: 5
expected output : 5.0
prediction: 5.0
input: 5 + 3
expected output : 8.0
prediction: 8.0
input: 2^3.171
expected output : 9.006708689094099
prediction: 9.006708689094099
input: 2 ^3.171
expected output : 9.006708689094099
prediction: 9.006708689094099
input: two to the power of three point one hundred seventy one
expected output : 9.006708689094099
prediction: 9.006708689094099
input: five + three squared minus 1
expected output : 13.0
prediction: 13.0
input: 2097 times 27.31
expected output : 57269.07
prediction: 57269.07
input: two thousand ninety seven times twenty seven point thirty one
expected output : 57269.07
prediction: 57269.07
input: 209758 / 2714
expected output : 77.28739867354459
prediction: 77.28739867354459
input: 209758.857 divided by 2714.31
expected output : 77.27888745205964
prediction: 77.27888745205964
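One caution on the parsing step above: str.strip("Answer: ") removes any of those characters from both ends rather than the literal prefix, so it happens to work for these outputs but can silently corrupt others. A more explicit sketch (requires Python 3.9+ for removeprefix):

# Sketch: remove the literal "Answer: " prefix instead of stripping a character set
def parse_numeric(answer: str) -> float:
    return float(answer.strip().removeprefix("Answer: "))

numeric_output = [parse_numeric(p['answer']) for p in predictions]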
Question Answering Benchmarking: State of the Union Address#

Here we go over how to benchmark performance on a question answering task over a state of the union address.

It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.

# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"

Loading the data#

First, let's load the data.

from langchain.evaluation.loading import load_dataset
dataset = load_dataset("question-answering-state-of-the-union")

Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-state-of-the-union-a7e5a3b2db4f440d/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)

Setting up a chain#

Now we need to create some pipelines for doing question answering. Step one in that is creating an index over the data in question.

from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/state_of_the_union.txt")
from langchain.indexes import VectorstoreIndexCreator
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

Now we can create a question answering chain.

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")

Make a prediction#

First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is also a lot cheaper than running over multiple datapoints.

chain(dataset[0])

{'question': 'What is the purpose of the NATO Alliance?',
 'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
 'result': ' The NATO Alliance was created to secure peace and stability in Europe after World War 2.'}

Make many predictions#

Now we can make many predictions.

predictions = chain.apply(dataset)

Evaluate performance#

Now we can evaluate the predictions. The first thing we can do is look at them by eye.

predictions[0]

{'question': 'What is the purpose of the NATO Alliance?',
 'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',
 'result': ' The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'}

Next, we can use a language model to score them programmatically.

from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(dataset, predictions, question_key="question", prediction_key="result")

We can add the graded output to the predictions dict and then get a count of the grades.

for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']

from collections import Counter
Counter([pred['grade'] for pred in predictions])

Counter({' CORRECT': 7, ' INCORRECT': 4})

We can also filter the datapoints to the incorrect examples and look at them.

incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]

{'question': 'What is the U.S. Department of Justice doing to combat the crimes of Russian oligarchs?',
 'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs.',
 'result': ' The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and is naming a chief prosecutor for pandemic fraud.',
 'grade': ' INCORRECT'}
Generic Agent Evaluation#

Good evaluation is key for quickly iterating on your agent's prompts and tools. Here we provide an example of how to use the TrajectoryEvalChain to evaluate your agent.

Setup#

Let's start by defining our agent.

from langchain import Wikipedia
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.agents.react.base import DocstoreExplorer
from langchain.memory import ConversationBufferMemory
from langchain import LLMMathChain
from langchain.llms import OpenAI
from langchain import SerpAPIWrapper

docstore = DocstoreExplorer(Wikipedia())
math_llm = OpenAI(temperature=0)
llm_math_chain = LLMMathChain(llm=math_llm, verbose=True)
search = SerpAPIWrapper()

tools = [
    Tool(
        name="Search",
        func=docstore.search,
        description="useful for when you need to ask with search",
    ),
    Tool(
        name="Lookup",
        func=docstore.lookup,
        description="useful for when you need to ask with lookup",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for doing calculations",
    ),
    Tool(
        name="Search the Web (SerpAPI)",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
]

memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True, output_key="output"
)
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    return_intermediate_steps=True,  # This is needed for the evaluation later
)

Testing the Agent#

Now let's try our agent out on some example queries.

query_one = "How many ping pong balls would it take to fill the entire Empire State Building?"
test_outputs_one = agent({"input": query_one}, return_only_outputs=False)

> Entering new AgentExecutor chain...
{
    "action": "Search the Web (SerpAPI)",
    "action_input": "How many ping pong balls would it take to fill the entire Empire State Building?"
}
Observation: 12.8 billion. The volume of the Empire State Building Googles in at around 37 million ft³. A golf ball comes in at about 2.5 in³.
Thought:{
    "action": "Final Answer",
    "action_input": "It would take approximately 12.8 billion ping pong balls to fill the entire Empire State Building."
}
> Finished chain.

This looks good! Let's try it out on another query.

query_two = "If you laid the Eiffel Tower end to end, how many would you need cover the US from coast to coast?"
test_outputs_two = agent({"input": query_two}, return_only_outputs=False)

> Entering new AgentExecutor chain...
{
    "action": "Calculator",
    "action_input": "The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,876 Eiffel Towers."
}

> Entering new LLMMathChain chain...
The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,876 Eiffel Towers.
```text
4828000 / 324
```
...numexpr.evaluate("4828000 / 324")...
Answer: 14901.234567901234
> Finished chain.
Observation: Answer: 14901.234567901234
Thought:{
    "action": "Calculator",
    "action_input": "The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,901 Eiffel Towers."
}

> Entering new LLMMathChain chain...
The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,901 Eiffel Towers.
```text
4828000 / 324
```
...numexpr.evaluate("4828000 / 324")...
Answer: 14901.234567901234
> Finished chain.
Observation: Answer: 14901.234567901234
Thought:{
    "action": "Final Answer",
    "action_input": "If you laid the Eiffel Tower end to end, you would need approximately 14,901 Eiffel Towers to cover the US from coast to coast."
}
> Finished chain.

This doesn't look so good. Let's try running some evaluation.

Evaluating the Agent#

Let's start by defining the TrajectoryEvalChain.

from langchain.evaluation.agents import TrajectoryEvalChain

# Define chain
eval_chain = TrajectoryEvalChain.from_llm(
    llm=ChatOpenAI(temperature=0, model_name="gpt-4"),  # Note: This must be a ChatOpenAI model
    agent_tools=agent.tools,
    return_reasoning=True,
)

Let's try evaluating the first query.

question, steps, answer = test_outputs_one["input"], test_outputs_one["intermediate_steps"], test_outputs_one["output"]
evaluation = eval_chain(
    inputs={"question": question, "answer": answer, "agent_trajectory": eval_chain.get_agent_trajectory(steps)},
)
print("Score from 1 to 5: ", evaluation["score"])
print("Reasoning: ", evaluation["reasoning"])

Score from 1 to 5:  1
Reasoning:  First, let's evaluate the final answer. The final answer is incorrect because it uses the volume of golf balls instead of ping pong balls. The answer is not helpful.

Second, does the model use a logical sequence of tools to answer the question? The model only used one tool, which was the Search the Web (SerpAPI). It did not use the Calculator tool to calculate the correct volume of ping pong balls.

Third, does the AI language model use the tools in a helpful way? The model used the Search the Web (SerpAPI) tool, but the output was not helpful because it provided information about golf balls instead of ping pong balls.
Fourth, does the AI language model use too many steps to answer the question? The model used only one step, which is not too many. However, it should have used more steps to provide a correct answer.

Fifth, are the appropriate tools used to answer the question? The model should have used the Search tool to find the volume of the Empire State Building and the volume of a ping pong ball. Then, it should have used the Calculator tool to calculate the number of ping pong balls needed to fill the building.

Judgment: Given the incorrect final answer and the inappropriate use of tools, we give the model a score of 1.

That seems about right. Let's try the second query.

question, steps, answer = test_outputs_two["input"], test_outputs_two["intermediate_steps"], test_outputs_two["output"]
evaluation = eval_chain(
    inputs={"question": question, "answer": answer, "agent_trajectory": eval_chain.get_agent_trajectory(steps)},
)
print("Score from 1 to 5: ", evaluation["score"])
print("Reasoning: ", evaluation["reasoning"])

Score from 1 to 5:  3
Reasoning:  i. Is the final answer helpful? Yes, the final answer is helpful as it provides an approximate number of Eiffel Towers needed to cover the US from coast to coast.

ii. Does the AI language use a logical sequence of tools to answer the question?
No, the AI language model does not use a logical sequence of tools. It directly uses the Calculator tool without first using the Search or Lookup tools to find the necessary information (length of the Eiffel Tower and distance from coast to coast in the US). iii. Does the AI language model use the tools in a helpful way? The AI language model uses the Calculator tool in a helpful way to perform the calculation, but it should have used the Search or Lookup tools first to find the required information. iv. Does the AI language model use too many steps to answer the question? No, the AI language model does not use too many steps. However, it repeats the same step twice, which is unnecessary. v. Are the appropriate tools used to answer the question? Not entirely. The AI language model should have used the Search or Lookup tools to find the required information before using the Calculator tool. Given the above evaluation, the AI language model's performance can be scored as follows: That also sounds about right. In conclusion, the TrajectoryEvalChain allows us to use GPT-4 to score both our agent's outputs and tool use in addition to giving us the reasoning behind the evaluation.
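The same grading loop scales to a whole batch of recorded runs. A minimal sketch, assuming each run is a dict shaped like test_outputs_one above (the test_outputs list and the integer cast of the score are illustrative, not part of the original notebook):

```python
# Score every recorded agent run with the TrajectoryEvalChain and average the results.
test_outputs = [test_outputs_one, test_outputs_two]  # hypothetical batch of runs

scores = []
for output in test_outputs:
    evaluation = eval_chain(
        inputs={
            "question": output["input"],
            "answer": output["output"],
            "agent_trajectory": eval_chain.get_agent_trajectory(output["intermediate_steps"]),
        },
    )
    # The chain reports the score as text; cast it for aggregation.
    scores.append(int(evaluation["score"]))

print("Mean score:", sum(scores) / len(scores))
```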
QA Generation# This notebook shows how to use the QAGenerationChain to come up with question-answer pairs over a specific document. This is important because often you may not have data to evaluate your question-answering system over, so this is a cheap and lightweight way to generate it! from langchain.document_loaders import TextLoader loader = TextLoader("../../modules/state_of_the_union.txt") doc = loader.load()[0] from langchain.chat_models import ChatOpenAI from langchain.chains import QAGenerationChain chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0)) qa = chain.run(doc.page_content) qa[1] {'question': 'What is the U.S. Department of Justice doing to combat the crimes of Russian oligarchs?', 'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs.'}
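The generated pairs plug straight into the grading tools used in the benchmarking notebooks. A minimal sketch, assuming some question-answering chain named qa_chain (hypothetical here) that answers queries over the same document, graded with QAEvalChain:

```python
from langchain.evaluation.qa import QAEvalChain

# `qa` is the list of {"question": ..., "answer": ...} dicts generated above.
examples = [{"query": pair["question"], "answer": pair["answer"]} for pair in qa]

# Hypothetical: any chain or function that answers a query over the same document.
predictions = [{"result": qa_chain.run(example["query"])} for example in examples]

eval_chain = QAEvalChain.from_llm(ChatOpenAI(temperature=0))
graded_outputs = eval_chain.evaluate(
    examples,
    predictions,
    question_key="query",
    answer_key="answer",
    prediction_key="result",
)
print(graded_outputs)
```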
SQL Question Answering Benchmarking: Chinook# Here we go over how to benchmark performance on a question answering task over a SQL database. It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up. # Comment this out if you are NOT using tracing import os os.environ["LANGCHAIN_HANDLER"] = "langchain" Loading the data# First, let's load the data. from langchain.evaluation.loading import load_dataset dataset = load_dataset("sql-qa-chinook") Downloading and preparing dataset json/LangChainDatasets--sql-qa-chinook to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Dataset json downloaded and prepared to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data. dataset[0] {'question': 'How many employees are there?', 'answer': '8'} Setting up a chain# This uses the example Chinook database. To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository. Note that here we load a simple chain. If you want to experiment with more complex chains, or an agent, just create the chain object in a different way. from langchain import OpenAI, SQLDatabase, SQLDatabaseChain db = SQLDatabase.from_uri("sqlite:///../../../notebooks/Chinook.db") llm = OpenAI(temperature=0) Now we can create a SQL database chain. chain = SQLDatabaseChain(llm=llm, database=db, input_key="question") Make a prediction# First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and is also a lot cheaper than running over multiple datapoints.
chain(dataset[0]) {'question': 'How many employees are there?', 'answer': '8', 'result': ' There are 8 employees.'} Make many predictions# Now we can make predictions. Note that we add a try-except because this chain can sometimes error (if the SQL is written incorrectly, etc.). predictions = [] predicted_dataset = [] error_dataset = [] for data in dataset: try: predictions.append(chain(data)) predicted_dataset.append(data) except: error_dataset.append(data) Evaluate performance# Now we can evaluate the predictions. We can use a language model to score them programmatically. from langchain.evaluation.qa import QAEvalChain llm = OpenAI(temperature=0) eval_chain = QAEvalChain.from_llm(llm) graded_outputs = eval_chain.evaluate(predicted_dataset, predictions, question_key="question", prediction_key="result") We can add the graded output to the predictions dict and then get a count of the grades. for i, prediction in enumerate(predictions): prediction['grade'] = graded_outputs[i]['text'] from collections import Counter Counter([pred['grade'] for pred in predictions]) Counter({' CORRECT': 3, ' INCORRECT': 4}) We can also filter the datapoints to the incorrect examples and look at them. incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"] incorrect[0]
{'question': 'How many employees are also customers?', 'answer': 'None', 'result': ' 59 employees are also customers.', 'grade': ' INCORRECT'}
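For later error analysis, it can help to persist the graded predictions so failures can be inspected without re-running the LLM calls. A minimal sketch (the filename is arbitrary):

```python
import json

# Each prediction dict holds question, answer, result, and grade strings,
# so the list serializes directly to JSON.
with open("sql_qa_graded_predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```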
Voice Assistant# This chain creates a clone of ChatGPT with a few modifications to make it a voice assistant. It uses the pyttsx3 and speech_recognition libraries to convert text to speech and speech to text, respectively. The prompt template is also changed to make it more suitable for voice assistant use. from langchain import OpenAI, ConversationChain, LLMChain, PromptTemplate from langchain.memory import ConversationBufferWindowMemory template = """Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. {history} Human: {human_input} Assistant:""" prompt = PromptTemplate( input_variables=["history", "human_input"], template=template ) chatgpt_chain = LLMChain( llm=OpenAI(temperature=0), prompt=prompt, verbose=True, memory=ConversationBufferWindowMemory(k=2), ) import speech_recognition as sr import pyttsx3 engine = pyttsx3.init() def listen(command_queue): r = sr.Recognizer() with sr.Microphone() as source: print('Calibrating...') r.adjust_for_ambient_noise(source, duration=5) # optional parameters to adjust microphone sensitivity # r.energy_threshold = 200 # r.pause_threshold=0.5 print('Okay, go!') while True: text = '' print('listening now...') try: audio = r.listen(source, timeout=5, phrase_time_limit=30) print('Recognizing...') # whisper model options are found here: https://github.com/openai/whisper#available-models-and-languages
# other speech recognition models are also available. text = r.recognize_whisper(audio, model='medium.en', show_dict=True, )['text'] except Exception as e: unrecognized_speech_text = f'Sorry, I didn\'t catch that. Exception was: {e}s' text = unrecognized_speech_text print(text) response_text = chatgpt_chain.predict(human_input=text) print(response_text) engine.say(response_text) engine.runAndWait() listen(None) Calibrating... Okay, go! listening now... Recognizing... C:\Users\jaden\AppData\Roaming\Python\Python310\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm Hello, Assistant. What's going on? > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Hello, Assistant. What's going on? Assistant: > Finished chain. Hi there! It's great to hear from you. I'm doing well. How can I help you today? listening now... Recognizing... That's cool. Isn't that neat? Yeah, I'm doing great. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI.
Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Hello, Assistant. What's going on? AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today? Human: That's cool. Isn't that neat? Yeah, I'm doing great.
Assistant: > Finished chain. That's great to hear! What can I do for you today? listening now... Recognizing... Thank you. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Hello, Assistant. What's going on?
Human: Hello, Assistant. What's going on? AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today? Human: That's cool. Isn't that neat? Yeah, I'm doing great. AI: That's great to hear! What can I do for you today? Human: Thank you. Assistant: > Finished chain. You're welcome! Is there anything else I can help you with? listening now... Recognizing... I'd like to learn more about neural networks. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: That's cool. Isn't that neat? Yeah, I'm doing great. AI: That's great to hear! What can I do for you today? Human: Thank you. AI: You're welcome! Is there anything else I can help you with? Human: I'd like to learn more about neural networks. Assistant: > Finished chain. Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms. listening now... Recognizing... Tell me a fun fact about neural networks. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Thank you. AI: You're welcome! Is there anything else I can help you with? Human: I'd like to learn more about neural networks. AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms. Human: Tell me a fun fact about neural networks. Assistant: > Finished chain.
Assistant: > Finished chain. Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do. listening now... Recognizing... Tell me about a brand new discovered bird species. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: I'd like to learn more about neural networks. AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms. Human: Tell me a fun fact about neural networks. AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do. Human: Tell me about a brand new discovered bird species. Assistant: > Finished chain. A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years. listening now... Recognizing... Tell me a children's story about the importance of honesty and trust. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI.
Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Tell me a fun fact about neural networks.
Human: Tell me a fun fact about neural networks. AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do. Human: Tell me about a brand new discovered bird species. AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years. Human: Tell me a children's story about the importance of honesty and trust. Assistant: > Finished chain. Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life. listening now... Recognizing... Wow, Assistant, that was a really good story. Congratulations! > Entering new LLMChain chain... Prompt after formatting:
> Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Tell me about a brand new discovered bird species.
Human: Tell me about a brand new discovered bird species. AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years. Human: Tell me a children's story about the importance of honesty and trust. AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life. Human: Wow, Assistant, that was a really good story. Congratulations! Assistant: > Finished chain. Thank you! I'm glad you enjoyed it. listening now... Recognizing... Thank you. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI.
Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Tell me a children's story about the importance of honesty and trust.
Human: Tell me a children's story about the importance of honesty and trust. AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life. Human: Wow, Assistant, that was a really good story. Congratulations! AI: Thank you! I'm glad you enjoyed it. Human: Thank you. Assistant: > Finished chain. You're welcome! listening now... Recognizing... Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI.
Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Wow, Assistant, that was a really good story. Congratulations! AI: Thank you! I'm glad you enjoyed it. Human: Thank you. AI: You're welcome!
Human: Thank you. AI: You're welcome! Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Assistant: > Finished chain. Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software. listening now... Recognizing... Our whole process of awesome is free. > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Thank you. AI: You're welcome! Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? AI: Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software. Human: Our whole process of awesome is free. Assistant: > Finished chain. That's great! It's always nice to have access to free tools and resources. listening now... Recognizing... No, I meant to ask, are those options that you mentioned free? No, I meant to ask, are those options that you mentioned free? > Entering new LLMChain chain... Prompt after formatting: Assistant is a large language model trained by OpenAI.
Prompt after formatting: Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
AI: Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software. Human: Our whole process of awesome is free. AI: That's great! It's always nice to have access to free tools and resources. Human: No, I meant to ask, are those options that you mentioned free? No, I meant to ask, are those options that you mentioned free? Assistant: > Finished chain. Yes, the online brands I mentioned are all free to use. Adobe Photoshop Express, Pixlr, and Fotor are all free to use, and Freq is a free music production platform. listening now... --------------------------------------------------------------------------- KeyboardInterrupt Traceback (most recent call last) Cell In[6], line 1 ----> 1 listen(None) Cell In[5], line 20, in listen(command_queue) 18 print('listening now...') 19 try: ---> 20 audio = r.listen(source, timeout=5, phrase_time_limit=30) 21 # audio = r.record(source,duration = 5) 22 print('Recognizing...') File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\speech_recognition\__init__.py:523, in Recognizer.listen(self, source, timeout, phrase_time_limit, snowboy_configuration) 520 if phrase_time_limit and elapsed_time - phrase_start_time > phrase_time_limit: 521 break
--> 523 buffer = source.stream.read(source.CHUNK) 524 if len(buffer) == 0: break # reached end of the stream 525 frames.append(buffer) File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\speech_recognition\__init__.py:199, in Microphone.MicrophoneStream.read(self, size) 198 def read(self, size): --> 199 return self.pyaudio_stream.read(size, exception_on_overflow=False) File c:\ProgramData\miniconda3\envs\lang\lib\site-packages\pyaudio\__init__.py:570, in PyAudio.Stream.read(self, num_frames, exception_on_overflow) 567 if not self._is_input: 568 raise IOError("Not input stream", 569 paCanNotReadFromAnOutputOnlyStream) --> 570 return pa.read_stream(self._stream, num_frames, 571 exception_on_overflow) KeyboardInterrupt:
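The loop runs until it is interrupted, which is why the session above ends in a KeyboardInterrupt traceback. A minimal sketch of a cleaner shutdown (this wrapper is an illustration, not part of the original notebook):

```python
try:
    listen(None)
except KeyboardInterrupt:
    # Stop any in-progress speech and exit without a traceback.
    engine.stop()
    print("Goodbye!")
```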
Question answering over group chat messages# In this tutorial, we are going to use LangChain + Deep Lake with GPT-4 to semantically search and ask questions over a group chat. View a working demo here 1. Install required packages# !python3 -m pip install --upgrade langchain deeplake openai tiktoken 2. Add API keys# import os import getpass from langchain.document_loaders import PyPDFLoader, TextLoader from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter from langchain.vectorstores import DeepLake from langchain.chains import ConversationalRetrievalChain, RetrievalQA from langchain.chat_models import ChatOpenAI from langchain.llms import OpenAI os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:') os.environ['ACTIVELOOP_ORG'] = getpass.getpass('Activeloop Org:') org = os.environ['ACTIVELOOP_ORG'] embeddings = OpenAIEmbeddings() dataset_path = 'hub://' + org + '/data' 3. Create sample data# You can generate a sample group chat conversation using ChatGPT with this prompt:
Generate a group chat conversation with three friends talking about their day, referencing real places and fictional names. Make it funny and as detailed as possible. I've already generated such a chat in messages.txt. We can keep it simple and use this for our example. 4. Ingest chat embeddings# We load the messages from the text file, chunk them, and upload them to the Activeloop vector store. with open("messages.txt") as f: state_of_the_union = f.read() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) pages = text_splitter.split_text(state_of_the_union) text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100) texts = text_splitter.create_documents(pages) print(texts) dataset_path = 'hub://'+org+'/data' embeddings = OpenAIEmbeddings() db = DeepLake.from_documents(texts, embeddings, dataset_path=dataset_path, overwrite=True) 5. Ask questions# Now we can ask a question and get an answer back with a semantic search: db = DeepLake(dataset_path=dataset_path, read_only=True, embedding_function=embeddings) retriever = db.as_retriever() retriever.search_kwargs['distance_metric'] = 'cos'
retriever.search_kwargs['k'] = 4 qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=False) # What was the restaurant the group was talking about called? query = input("Enter query:") # The Hungry Lobster ans = qa({"query": query}) print(ans)
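Note that ConversationalRetrievalChain is imported above but never used; the same retriever can also power a chat-style loop that carries history between questions. A minimal sketch (the questions are illustrative):

```python
# Reuse the Deep Lake retriever with a conversational chain so that
# follow-up questions can reference earlier answers.
conv_qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), retriever=retriever)

chat_history = []
first_question = "What was the restaurant the group was talking about called?"
result = conv_qa({"question": first_question, "chat_history": chat_history})
chat_history.append((first_question, result["answer"]))

followup = conv_qa({"question": "What did they say about it?", "chat_history": chat_history})
print(followup["answer"])
```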
Quickstart Guide# This tutorial gives you a quick walkthrough of building an end-to-end language model application with LangChain. Installation# To get started, install LangChain with the following command: pip install langchain # or conda install langchain -c conda-forge Environment Setup# Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we will be using OpenAI's APIs, so we will first need to install their SDK: pip install openai We will then need to set the environment variable in the terminal. export OPENAI_API_KEY="..." Alternatively, you could do this from inside the Jupyter notebook (or Python script): import os os.environ["OPENAI_API_KEY"] = "..." Building a Language Model Application: LLMs# Now that we have installed LangChain and set up our environment, we can start building our language model application.
LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or be used individually for simple applications. LLMs: Get predictions from a language model# The most basic building block of LangChain is calling an LLM on some input. Let’s walk through a simple example of how to do this. For this purpose, let’s pretend we are building a service that generates a company name based on what the company makes. In order to do this, we first need to import the LLM wrapper. from langchain.llms import OpenAI We can then initialize the wrapper with any arguments. In this example, we probably want the outputs to be MORE random, so we’ll initialize it with a HIGH temperature. llm = OpenAI(temperature=0.9) We can now call it on some input! text = "What would be a good company name for a company that makes colorful socks?" print(llm(text)) Feetful of Fun For more details on how to use LLMs within LangChain, see the LLM getting started guide. Prompt Templates: Manage prompts for LLMs# Calling an LLM is a great first step, but it’s just the beginning. Normally when you use an LLM in an application, you are not sending user input directly to the LLM. Instead, you are probably taking user input and constructing a prompt, and then sending that to the LLM. For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that made colorful socks. In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.
This is easy to do with LangChain! First, let's define the prompt template: from langchain.prompts import PromptTemplate prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) Let's now see how this works! We can call the .format method to format it. print(prompt.format(product="colorful socks")) What is a good name for a company that makes colorful socks? For more details, check out the getting started guide for prompts. Chains: Combine LLMs and prompts in multi-step workflows# Up until now, we've worked with the PromptTemplate and LLM primitives by themselves. But of course, a real application is not just one primitive, but rather a combination of them. A chain in LangChain is made up of links, which can be either primitives like LLMs or other chains. The most fundamental type of chain is an LLMChain, which consists of a PromptTemplate and an LLM. Extending the previous example, we can construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. from langchain.prompts import PromptTemplate from langchain.llms import OpenAI llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:
from langchain.chains import LLMChain chain = LLMChain(llm=llm, prompt=prompt) Now we can run that chain only specifying the product! chain.run("colorful socks") # -> '\n\nSocktastic!' There we go! There's the first chain - an LLM Chain. This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains. For more details, check out the getting started guide for chains. Agents: Dynamically Call Chains Based on User Input# So far the chains we've looked at run in a predetermined order. Agents no longer do: they use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user. When used correctly, agents can be extremely powerful. In this tutorial, we show you how to easily use agents through the simplest, highest level API. In order to load agents, you should understand the following concepts: Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output. LLM: The language model powering the agent. Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon).
Agents: For a list of supported agents and their specifications, see here. Tools: For a list of predefined tools and their specifications, see here. For this example, you will also need to install the SerpAPI Python package. pip install google-search-results And set the appropriate environment variables. import os os.environ["SERPAPI_API_KEY"] = "..." Now we can get started! from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.llms import OpenAI # First, let's load the language model we're going to use to control the agent. llm = OpenAI(temperature=0) # Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in. tools = load_tools(["serpapi", "llm-math"], llm=llm) # Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use. agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) # Now let's test it out! agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?") > Entering new AgentExecutor chain... I need to find the temperature first, then use the calculator to raise it to the .023 power. Action: Search Action Input: "High temperature in SF yesterday"
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
76223b70a2a5-5
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117
Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.

> Finished chain.

Memory: Add State to Chains and Agents#

So far, all the chains and agents we’ve gone through have been stateless. But often, you may want a chain or agent to have some concept of “memory” so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use context from them to have a better conversation. This would be a type of “short-term memory”. On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of “long-term memory”. For more concrete ideas on the latter, see this awesome paper.

LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
76223b70a2a5-6
By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let’s take a look at using this chain (setting verbose=True so we can see the prompt).

from langchain import OpenAI, ConversationChain

llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)

output = conversation.predict(input="Hi there!")
print(output)

> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI:

> Finished chain.
' Hello! How are you today?'

output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)

> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi there!
AI:  Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:

> Finished chain.
" That's great! What would you like to talk about?"
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
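The default buffer memory keeps the entire history in the prompt. As a rough sketch of a variation, the snippet below swaps in ConversationBufferWindowMemory so that only the last k exchanges are kept; the choice of k=2 is arbitrary, for illustration only:

from langchain import OpenAI, ConversationChain
from langchain.memory import ConversationBufferWindowMemory

llm = OpenAI(temperature=0)
# Keep only the two most recent human/AI exchanges in the prompt.
memory = ConversationBufferWindowMemory(k=2)
conversation = ConversationChain(llm=llm, memory=memory, verbose=True)
conversation.predict(input="Hi there!")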
76223b70a2a5-7
Building a Language Model Application: Chat Models#

Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.

Chat model APIs are fairly new, so we are still figuring out the correct abstractions.

Get Message Completions from a Chat Model#

You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage - ChatMessage takes in an arbitrary role parameter. Most of the time, you’ll just be dealing with HumanMessage, AIMessage, and SystemMessage.

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0)

You can get completions by passing in a single message.

chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

You can also pass in multiple messages for OpenAI’s gpt-3.5-turbo and gpt-4 models.

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
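Since ChatMessage accepts an arbitrary role, you can also represent messages outside the three common types. A minimal sketch - the "function" role string here is an illustrative assumption, not a role any particular provider is guaranteed to accept:

from langchain.schema import ChatMessage

# ChatMessage carries an explicit role string alongside the content.
msg = ChatMessage(role="function", content="{'result': 42}")
print(msg.role, msg.content)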
76223b70a2a5-8
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter:

batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})

You can recover things like token usage from this LLMResult:
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
76223b70a2a5-9
result.llm_output['token_usage']
# -> {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}

Chat Prompt Templates#

Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt - this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.

For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
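To see the PromptValue conversion described above in isolation, here is a small sketch reusing the chat_prompt defined just before; the comments describe the shapes of the results rather than verified output:

# format_prompt returns a PromptValue rather than raw text.
prompt_value = chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
)
prompt_value.to_string()    # a single string, suitable as input to a plain LLM
prompt_value.to_messages()  # a list of messages, suitable as input to a chat model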
76223b70a2a5-10
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

Chains with Chat Models#

The LLMChain discussed in the above section can be used with chat models as well:

from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
76223b70a2a5-11
# -> "J'aime programmer." Agents with Chat Models# Agents can also be used with chat models, you can initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type. from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.chat_models import ChatOpenAI from langchain.llms import OpenAI # First, let's load the language model we're going to use to control the agent. chat = ChatOpenAI(temperature=0) # Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in. llm = OpenAI(temperature=0) tools = load_tools(["serpapi", "llm-math"], llm=llm) # Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use. agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True) # Now let's test it out! agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?") > Entering new AgentExecutor chain... Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power. Action: { "action": "Search", "action_input": "Olivia Wilde boyfriend" }
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
76223b70a2a5-12
"action_input": "Olivia Wilde boyfriend" } Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling. Thought:I need to use a search engine to find Harry Styles' current age. Action: { "action": "Search", "action_input": "Harry Styles age" } Observation: 29 years Thought:Now I need to calculate 29 raised to the 0.23 power. Action: { "action": "Calculator", "action_input": "29^0.23" } Observation: Answer: 2.169459462491557 Thought:I now know the final answer. Final Answer: 2.169459462491557 > Finished chain. '2.169459462491557' Memory: Add State to Chains and Agents# You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object. from langchain.prompts import ( ChatPromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate, HumanMessagePromptTemplate ) from langchain.chains import ConversationChain from langchain.chat_models import ChatOpenAI from langchain.memory import ConversationBufferMemory prompt = ChatPromptTemplate.from_messages([
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
76223b70a2a5-13
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)

conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'

conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"

conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
/content/https://python.langchain.com/en/latest/getting_started/getting_started.html
64e1c0801cb5-0
Memory#

Note: Conceptual Guide

By default, Chains and Agents are stateless, meaning that they treat each incoming query independently (as are the underlying LLMs and chat models). In some applications (chatbots being a GREAT example) it is highly important to remember previous interactions, both at a short-term and a long-term level. The concept of “Memory” exists to do exactly that.

LangChain provides memory components in two forms. First, LangChain provides helper utilities for managing and manipulating previous chat messages. These are designed to be modular and useful regardless of how they are used. Secondly, LangChain provides easy ways to incorporate these utilities into chains.

The following sections of documentation are provided:

Getting Started: An overview of how to get started with different types of memory.
How-To Guides: A collection of how-to guides. These highlight different types of memory, as well as how to use memory in chains.
/content/https://python.langchain.com/en/latest/modules/memory.html
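As a minimal sketch of the message-management utilities mentioned above, the snippet below uses ChatMessageHistory, one of the lower-level helpers for storing and inspecting prior messages; the example messages are invented:

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("Hi there!")
history.add_ai_message("Hello! How can I help you?")

# The stored history is a list of structured message objects, not one string.
print(history.messages)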
ccee5bb2a730-0
Chains#

Note: Conceptual Guide

Using an LLM in isolation is fine for some simple applications, but many more complex ones require chaining LLMs - either with each other or with other experts. LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of use.

The following sections of documentation are provided:

Getting Started: A getting started guide for chains, to get you up and running quickly.
How-To Guides: A collection of how-to guides. These highlight how to use various types of chains.
Reference: API reference documentation for all Chain classes.
/content/https://python.langchain.com/en/latest/modules/chains.html
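To make “chaining LLMs with each other” concrete, here is a rough sketch using SimpleSequentialChain, which feeds each chain's output into the next; the two toy prompts are invented for illustration:

from langchain.llms import OpenAI
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)

# First chain: propose a company name for a product.
name_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
))

# Second chain: write a slogan for that company name.
slogan_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["company_name"],
    template="Write a catchy slogan for the company {company_name}.",
))

# Each chain's output becomes the next chain's input.
overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain], verbose=True)
overall_chain.run("colorful socks")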
35de4076c5e9-0
Indexes#

Note: Conceptual Guide

Indexes refer to ways to structure documents so that LLMs can best interact with them. This module contains utility functions for working with documents, different types of indexes, and then examples for using those indexes in chains.

The most common way that indexes are used in chains is in a “retrieval” step. This step refers to taking a user’s query and returning the most relevant documents. We draw this distinction because (1) an index can be used for other things besides retrieval, and (2) retrieval can use other logic besides an index to find relevant documents. We therefore have a concept of a “Retriever” interface - this is the interface that most chains work with.

Most of the time when we talk about indexes and retrieval we are talking about indexing and retrieving unstructured data (like text documents). For interacting with structured data (SQL tables, etc) or APIs, please see the corresponding use case sections for links to relevant functionality. The primary index and retrieval types supported by LangChain are currently centered around vector databases, so a lot of the functionality dives deep on those topics. For an overview of everything related to this, please see the below notebook for getting started:

Getting Started

We then provide a deep dive on the four main components.

Document Loaders: How to load documents from a variety of sources.
Text Splitters: An overview of the abstractions and implementations around splitting text.
VectorStores: An overview of VectorStores and the many integrations LangChain provides.
Retrievers: An overview of Retrievers and the implementations LangChain provides.
/content/https://python.langchain.com/en/latest/modules/indexes.html
35de4076c5e9-1
Go Deeper#

Document Loaders
Text Splitters
Vectorstores
Retrievers
/content/https://python.langchain.com/en/latest/modules/indexes.html
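As a rough sketch of the Retriever interface described above - assuming the faiss package is installed and an OpenAI API key is set; the indexed texts and query are invented for illustration:

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Build a small in-memory vector index over a few example texts.
vectorstore = FAISS.from_texts(
    ["LangChain provides a standard Retriever interface.",
     "Vector stores index embeddings for similarity search."],
    OpenAIEmbeddings(),
)

# Any vector store can be exposed through the Retriever interface.
retriever = vectorstore.as_retriever()
docs = retriever.get_relevant_documents("What interface do most chains work with?")
print(docs)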
a95e8f37182a-0
Agents#

Note: Conceptual Guide

Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user’s input. In these types of chains, there is an “agent” which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call.

In this section of documentation, we first start with a Getting Started notebook to cover how to use all things related to agents in an end-to-end manner. We then split the documentation into the following sections:

Tools: An overview of the various tools LangChain supports.
Agents: An overview of the different agent types.
Toolkits: An overview of toolkits, and examples of the different ones LangChain supports.
Agent Executor: An overview of the Agent Executor class and examples of how to use it.

Go Deeper#

Tools
Agents
Toolkits
Agent Executors
/content/https://python.langchain.com/en/latest/modules/agents.html
1e2a7caf9bf6-0
Models#

Note: Conceptual Guide

This section of the documentation deals with different types of models that are used in LangChain. On this page we will go over the model types at a high level, but we have individual pages for each model type. The pages contain more detailed “how-to” guides for working with that model, as well as a list of different model providers.

LLMs: Large Language Models (LLMs) are the first type of models we cover. These models take a text string as input, and return a text string as output.
Chat Models: Chat Models are the second type of models we cover. These models are usually backed by a language model, but their APIs are more structured. Specifically, these models take a list of Chat Messages as input, and return a Chat Message.
Text Embedding Models: The third type of models we cover are text embedding models. These models take text as input and return a list of floats.

Go Deeper#

LLMs
Chat Models
Text Embedding Models
/content/https://python.langchain.com/en/latest/modules/models.html
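The three interfaces can be contrasted in a few lines. A quick sketch, assuming OpenAI-backed implementations and an API key in the environment:

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import HumanMessage

# LLM: text string in, text string out.
OpenAI()("Say hello.")

# Chat model: list of chat messages in, a chat message out.
ChatOpenAI()([HumanMessage(content="Say hello.")])

# Text embedding model: text in, list of floats out.
OpenAIEmbeddings().embed_query("Say hello.")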
a9cacdad1570-0
Prompts#

Note: Conceptual Guide

The new way of programming models is through prompts. A “prompt” refers to the input to the model. This input is rarely hard coded, but rather is often constructed from multiple components. A PromptTemplate is responsible for the construction of this input. LangChain provides several classes and functions to make constructing and working with prompts easy.

This section of documentation is split into four sections:

LLM Prompt Templates: How to use PromptTemplates to prompt Language Models.
Chat Prompt Templates: How to use PromptTemplates to prompt Chat Models.
Example Selectors: Oftentimes it is useful to include examples in prompts. These examples can be hardcoded, but it is often more powerful if they are dynamically selected. This section goes over example selection.
Output Parsers: Language models (and Chat Models) output text. But many times you may want to get more structured information than just text back. This is where output parsers come in. Output Parsers are responsible for (1) instructing the model how output should be formatted, and (2) parsing output into the desired format (including retrying if necessary).

Go Deeper#

Prompt Templates
Chat Prompt Template
Example Selectors
Output Parsers
/content/https://python.langchain.com/en/latest/modules/prompts.html
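To make the two output-parser responsibilities concrete, here is a small sketch with CommaSeparatedListOutputParser; the parsed string is a stand-in for a real model response:

from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()

# (1) Instructions to append to the prompt so the model formats its output.
print(parser.get_format_instructions())

# (2) Parsing the model's raw text back into structured data.
print(parser.parse("red, blue, green"))
# -> ['red', 'blue', 'green']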
ab48e5322750-0
LLMs#

Note: Conceptual Guide

Large Language Models (LLMs) are a core component of LangChain. LangChain is not a provider of LLMs, but rather provides a standard interface through which you can interact with a variety of LLMs.

The following sections of documentation are provided:

Getting Started: An overview of all the functionality the LangChain LLM class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our LLM class (streaming, async, etc).
Integrations: A collection of examples on how to integrate different LLM providers with LangChain (OpenAI, Hugging Face, etc).
Reference: API reference documentation for all LLM classes.
/content/https://python.langchain.com/en/latest/modules/models/llms.html
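As a brief sketch of the standard LLM interface: beyond simple calling, every LLM exposes a batched generate method that returns an LLMResult with per-prompt generations and provider metadata. The prompts are invented examples:

from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# generate accepts a list of prompts and runs them as one batch.
result = llm.generate(["Tell me a joke.", "Tell me a poem."])

print(len(result.generations))  # one list of generations per prompt -> 2
print(result.llm_output)        # provider metadata, e.g. token usage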
3aa44f1b5395-0
Text Embedding Models#

Note: Conceptual Guide

This documentation goes over how to use the Embedding class in LangChain.

The Embedding class is a class designed for interfacing with embeddings. There are lots of Embedding providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.

Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.

The base Embedding class in LangChain exposes two methods: embed_documents and embed_query. The largest difference is that these two methods have different interfaces: one works over multiple documents, while the other works over a single document. Besides this, another reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).

The following integrations exist for text embeddings.

Aleph Alpha
AzureOpenAI
Cohere
Fake Embeddings
Hugging Face Hub
InstructEmbeddings
Jina
Llama-cpp
OpenAI
SageMaker Endpoint Embeddings
Self Hosted Embeddings
Sentence Transformers Embeddings
TensorflowHub
/content/https://python.langchain.com/en/latest/modules/models/text_embedding.html
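A quick sketch of the two methods, using the OpenAI integration as an example (any provider with credentials configured would do); the texts are invented:

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# embed_query: one text in, one vector (list of floats) out.
query_vector = embeddings.embed_query("What is LangChain?")

# embed_documents: many texts in, one vector per text out.
doc_vectors = embeddings.embed_documents(["First document.", "Second document."])

print(len(query_vector), len(doc_vectors))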