Columns: id (string, 14-16 chars), text (string, 45-2.05k chars), source (string, 53-111 chars)
e2ed6b9e33d4-1
data = loader.load()
data

[Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]

loader = IFixitLoader("https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself")
data = loader.load()
data
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
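The chunk above begins mid-example, so the construction of the first loader is not shown. A minimal sketch of the full setup, assuming the import path used by this version of LangChain (langchain.document_loaders):

from langchain.document_loaders import IFixitLoader

# Build a loader for a single iFixit page; load() returns a list of Document objects
loader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")
data = loader.load()
print(data[0].metadata["title"])  # 'Banana Teardown'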
e2ed6b9e33d4-2
[Document(page_content='# My iPhone 6 is typing and opening apps by itself\nmy iphone 6 is typing and opening apps by itself. How do i fix this. I just bought it last week.\nI restored as manufactures cleaned up the screen\nthe problem continues\n\n## 27 Answers\n\nFilter by: \n\nMost Helpful\nNewest\nOldest\n\n### Accepted Answer\nHi,\nWhere did you buy it? If you bought it from Apple or from an official retailer like Carphone warehouse etc. Then you\'ll have a year warranty and can get it replaced free.\nIf you bought it second hand, from a third part repair shop or online, then it may still have warranty, unless it is refurbished and has been repaired elsewhere.\nIf this is the case, it may be the screen that needs replacing to solve your issue.\nEither way, wherever you got it, it\'s best to return it and get a refund or a replacement device. :-)\n\n\n\n### Most Helpful Answer\nI had the same issues, screen freezing, opening apps by itself,
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-3
same issues, screen freezing, opening apps by itself, selecting the screens and typing on it\'s own. I first suspected aliens and then ghosts and then hackers.\niPhone 6 is weak physically and tend to bend on pressure. And my phone had no case or cover.\nI took the phone to apple stores and they said sensors need to be replaced and possibly screen replacement as well. My phone is just 17 months old.\nHere is what I did two days ago and since then it is working like a charm..\nHold the phone in portrait (as if watching a movie). Twist it very very gently. do it few times.Rest the phone for 10 mins (put it on a flat surface). You can now notice those self typing things gone and screen getting stabilized.\nThen, reset the hardware (hold the power and home button till the screen goes off and comes back with apple logo). release the buttons when you see this.\nThen, connect to your laptop and log in to iTunes and reset your phone completely. (please
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-4
to iTunes and reset your phone completely. (please take a back-up first).\nAnd your phone should be good to use again.\nWhat really happened here for me is that the sensors might have stuck to the screen and with mild twisting, they got disengaged/released.\nI posted this in Apple Community and the moderators deleted it, for the best reasons known to them.\nInstead of throwing away your phone (or selling cheaply), try this and you could be saving your phone.\nLet me know how it goes.\n\n\n\n### Other Answer\nIt was the charging cord! I bought a gas station braided cord and it was the culprit. Once I plugged my OEM cord into the phone the GHOSTS went away.\n\n\n\n### Other Answer\nI\'ve same issue that I just get resolved. I first tried to restore it from iCloud back, however it was not a software issue or any virus issue, so after restore same problem continues. Then I get my phone to local area iphone repairing lab, and they detected
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-5
to local area iphone repairing lab, and they detected that it is an LCD issue. LCD get out of order without any reason (It was neither hit or nor slipped, but LCD get out of order all and sudden, while using it) it started opening things at random. I get LCD replaced with new one, that cost me $80.00 in total ($70.00 LCD charges + $10.00 as labor charges to fix it). iPhone is back to perfect mode now. It was iphone 6s. Thanks.\n\n\n\n### Other Answer\nI was having the same issue with my 6 plus, I took it to a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up and cleaned the connectors and connected the screen again and it solved the issue… it’s hardware, not software.\n\n\n\n### Other Answer\nHey.\nJust had this problem now. As it turns out, you just need to plug in your phone. I use a case and when I took it off I noticed that there was a lot
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-6
took it off I noticed that there was a lot of dust and dirt around the areas that the case didn\'t cover. I shined a light in my ports and noticed they were filled with dust. Tomorrow I plan on using pressurized air to clean it out and the problem should be solved. If you plug in your phone and unplug it and it stops the issue, I recommend cleaning your phone thoroughly.\n\n\n\n### Other Answer\nI simply changed the power supply and problem was gone. The block that plugs in the wall not the sub cord. The cord was fine but not the block.\n\n\n\n### Other Answer\nSomeone ask! I purchased my iPhone 6s Plus for 1000 from at&t. Before I touched it, I purchased a otter defender case. I read where at&t said touch desease was due to dropping! Bullshit!! I am 56 I have never dropped it!! Looks brand new! Never dropped or abused any way! I have my original charger. I am going to
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-7
I have my original charger. I am going to clean it and try everyone’s advice. It really sucks! I had 40,000,000 on my heart of Vegas slots! I play every day. I would be spinning and my fingers were no where max buttons and it would light up and switch to max. It did it 3 times before I caught it light up by its self. It sucks. Hope I can fix it!!!!\n\n\n\n### Other Answer\nNo answer, but same problem with iPhone 6 plus--random, self-generated jumping amongst apps and typing on its own--plus freezing regularly (aha--maybe that\'s what the "plus" in "6 plus" refers to?). An Apple Genius recommended upgrading to iOS 11.3.1 from 11.2.2, to see if that fixed the trouble. If it didn\'t, Apple will sell me a new phone for $168! Of couese the OS upgrade didn\'t fix the problem. Thanks for helping me figure out that it\'s most likely a hardware problem--which the
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-8
that it\'s most likely a hardware problem--which the "genius" probably knows too.\nI\'m getting ready to go Android.\n\n\n\n### Other Answer\nI experienced similar ghost touches. Two weeks ago, I changed my iPhone 6 Plus shell (I had forced the phone into it because it’s pretty tight), and also put a new glass screen protector (the edges of the protector don’t stick to the screen, weird, so I brushed pressure on the edges at times to see if they may smooth out one day miraculously). I’m not sure if I accidentally bend the phone when I installed the shell, or, if I got a defective glass protector that messes up the touch sensor. Well, yesterday was the worse day, keeps dropping calls and ghost pressing keys for me when I was on a call. I got fed up, so I removed the screen protector, and so far problems have not reoccurred yet. I’m crossing my fingers that problems indeed solved.\n\n\n\n### Other Answer\nthank you so much
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-9
solved.\n\n\n\n### Other Answer\nthank you so much for this post! i was struggling doing the reset because i cannot type userids and passwords correctly because the iphone 6 plus i have kept on typing letters incorrectly. I have been doing it for a day until i come across this article. Very helpful! God bless you!!\n\n\n\n### Other Answer\nI just turned it off, and turned it back on.\n\n\n\n### Other Answer\nMy problem has not gone away completely but its better now i changed my charger and turned off prediction ....,,,now it rarely happens\n\n\n\n### Other Answer\nI tried all of the above. I then turned off my home cleaned it with isopropyl alcohol 90%. Then I baked it in my oven on warm for an hour and a half over foil. Took it out and set it cool completely on the glass top stove. Then I turned on and it worked.\n\n\n\n### Other Answer\nI think at& t should man up and fix your phone for free! You pay
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-10
up and fix your phone for free! You pay a lot for a Apple they should back it. I did the next 30 month payments and finally have it paid off in June. My iPad sept. Looking forward to a almost 100 drop in my phone bill! Now this crap!!! Really\n\n\n\n### Other Answer\nIf your phone is JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeed a virus/malware on the phone which allowed for remote control of my iphone (even while in lock mode). My mistake for buying a third party iphone i suppose. Anyway i have since had the phone restored to factory and everything is working as expected for now. I will of course keep you posted if this changes. Thanks to all for the helpful posts, really helped me narrow a few things down.\n\n\n\n### Other Answer\nWhen my phone was doing this, it ended up being the screen protector that i got from 5 below. I took it off and it stopped. I
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-11
below. I took it off and it stopped. I ordered more protectors from amazon and replaced it\n\n\n\n### Other Answer\niPhone 6 Plus first generation….I had the same issues as all above, apps opening by themselves, self typing, ultra sensitive screen, items jumping around all over….it even called someone on FaceTime twice by itself when I was not in the room…..I thought the phone was toast and i’d have to buy a new one took me a while to figure out but it was the extra cheap block plug I bought at a dollar store for convenience of an extra charging station when I move around the house from den to living room…..cord was fine but bought a new Apple brand block plug…no more problems works just fine now. This issue was a recent event so had to narrow things down to what had changed recently to my phone so I could figure it out.\nI even had the same problem on a laptop with documents opening up by themselves…..a laptop that was plugged in to the same
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-12
laptop that was plugged in to the same wall plug as my phone charger with the dollar store block plug….until I changed the block plug.\n\n\n\n### Other Answer\nHad the problem: Inherited a 6s Plus from my wife. She had no problem with it.\nLooks like it was merely the cheap phone case I purchased on Amazon. It was either pinching the edges or torquing the screen/body of the phone. Problem solved.\n\n\n\n### Other Answer\nI bought my phone on march 6 and it was a brand new, but It sucks me uo because it freezing, shaking and control by itself. I went to the store where I bought this and I told them to replacr it, but they told me I have to pay it because Its about lcd issue. Please help me what other ways to fix it. Or should I try to remove the screen or should I follow your step above.\n\n\n\n### Other Answer\nI tried everything and it seems to come back to needing the original iPhone cable…or
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-13
come back to needing the original iPhone cable…or at least another 1 that would have come with another iPhone…not the $5 Store fast charging cables. My original cable is pretty beat up - like most that I see - but I’ve been beaten up much MUCH less by sticking with its use! I didn’t find that the casing/shell around it or not made any diff.\n\n\n\n### Other Answer\ngreat now I have to wait one more hour to reset my phone and while I was tryin to connect my phone to my computer the computer also restarted smh does anyone else knows how I can get my phone to work… my problem is I have a black dot on the bottom left of my screen an it wont allow me to touch a certain part of my screen unless I rotate my phone and I know the password but the first number is a 2 and it won\'t let me touch 1,2, or 3 so now I have to find a way to get rid of my password and all of
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-14
way to get rid of my password and all of a sudden my phone wants to touch stuff on its own which got my phone disabled many times to the point where I have to wait a whole hour and I really need to finish something on my phone today PLEASE HELPPPP\n\n\n\n### Other Answer\nIn my case , iphone 6 screen was faulty. I got it replaced at local repair shop, so far phone is working fine.\n\n\n\n### Other Answer\nthis problem in iphone 6 has many different scenarios and solutions, first try to reconnect the lcd screen to the motherboard again, if didnt solve, try to replace the lcd connector on the motherboard, if not solved, then remains two issues, lcd screen it self or touch IC. in my country some repair shops just change them all for almost 40$ since they dont want to troubleshoot one by one. readers of this comment also should know that partial screen not responding in other iphone models might also have an issue in LCD connector on the motherboard, specially if you
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-15
in LCD connector on the motherboard, specially if you lock/unlock screen and screen works again for sometime. lcd connectors gets disconnected lightly from the motherboard due to multiple falls and hits after sometime. best of luck for all\n\n\n\n### Other Answer\nI am facing the same issue whereby these ghost touches type and open apps , I am using an original Iphone cable , how to I fix this issue.\n\n\n\n### Other Answer\nThere were two issues with the phone I had troubles with. It was my dads and turns out he carried it in his pocket. The phone itself had a little bend in it as a result. A little pressure in the opposite direction helped the issue. But it also had a tiny crack in the screen which wasnt obvious, once we added a screen protector this fixed the issues entirely.\n\n\n\n### Other Answer\nI had the same problem with my 64Gb iPhone 6+. Tried a lot of things and eventually downloaded all my images and videos to my PC and restarted
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-16
all my images and videos to my PC and restarted the phone - problem solved. Been working now for two days.', lookup_str='', metadata={'source': 'https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself', 'title': 'My iPhone 6 is typing and opening apps by itself'}, lookup_index=0)]
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-17
loader = IFixitLoader("https://www.ifixit.com/Device/Standard_iPad") data = loader.load() data [Document(page_content="Standard iPad\nThe standard edition of the tablet computer made by Apple.\n== Background Information ==\n\nOriginally introduced in January 2010, the iPad is Apple's standard edition of their tablet computer. In total, there have been ten generations of the standard edition of the iPad.\n\n== Additional Information ==\n\n* [link|https://www.apple.com/ipad-select/|Official Apple Product Page]\n* [link|https://en.wikipedia.org/wiki/IPad#iPad|Official iPad Wikipedia]", lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Standard_iPad', 'title': 'Standard iPad'}, lookup_index=0)] Searching iFixit using /suggest# If you’re looking for a more general way to search iFixit based on a keyword or phrase, the /suggest endpoint will return content related to the search term, then the loader will load the content from each of the suggested items and prep and return the documents. data = IFixitLoader.load_suggestions("Banana") data
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-18
data = IFixitLoader.load_suggestions("Banana")
data

[Document(page_content='Banana\nTasty fruit. Good source of potassium. Yellow.\n== Background Information ==\n\nCommonly misspelled, this wildly popular, phone shaped fruit serves as nutrition and an obstacle to slow down vehicles racing close behind you. Also used commonly as a synonym for “crazy” or “insane”.\n\nBotanically, the banana is considered a berry, although it isn’t included in the culinary berry category containing strawberries and raspberries. Belonging to the genus Musa, the banana originated in Southeast Asia and Australia. Now largely cultivated throughout South and Central America, bananas are largely available throughout the world. They are especially valued as a staple food group in developing countries due to the banana tree’s ability to produce fruit year round.\n\nThe banana can be easily opened. Simply remove the outer yellow shell by cracking the top of the stem. Then, with the broken piece, peel downward on each side until the fruity components on the inside are exposed. Once the shell has been removed it cannot be put back together.\n\n== Technical Specifications ==\n\n* Dimensions: Variable depending on genetics of the parent tree\n* Color: Variable depending on ripeness, region, and season\n\n== Additional Information ==\n\n[link|https://en.wikipedia.org/wiki/Banana|Wiki: Banana]', lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Banana', 'title': 'Banana'}, lookup_index=0),
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
e2ed6b9e33d4-19
Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)] previous HTML next Images Contents Searching iFixit using /suggest By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/ifixit.html
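Since load_suggestions returns ordinary Document objects, the metadata shown above can be used to inspect what /suggest matched. A small usage sketch:

docs = IFixitLoader.load_suggestions("Banana")

# Each suggested item carries its page title and source URL in metadata
for doc in docs:
    print(doc.metadata["title"], "->", doc.metadata["source"])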
409c08f6eb27-0
Async API for LLM

LangChain provides async support for LLMs by leveraging the asyncio library. Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, only OpenAI and PromptLayerOpenAI are supported, but async support for other LLMs is on the roadmap.

You can use the agenerate method to call an OpenAI LLM asynchronously.

import time
import asyncio

from langchain.llms import OpenAI

def generate_serially():
    llm = OpenAI(temperature=0.9)
    for _ in range(10):
        resp = llm.generate(["Hello, how are you?"])
        print(resp.generations[0][0].text)

async def async_generate(llm):
    resp = await llm.agenerate(["Hello, how are you?"])
    print(resp.generations[0][0].text)

async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    tasks = [async_generate(llm) for _ in range(10)]
    await asyncio.gather(*tasks)

s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')

s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')

I'm doing well, thank you. How about you?
https://langchain.readthedocs.io/en/latest/modules/llms/async_llm.html
409c08f6eb27-1
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?

Concurrent executed in 1.39 seconds.

I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
I'm doing well, thanks! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?

Serial executed in 5.77 seconds.
https://langchain.readthedocs.io/en/latest/modules/llms/async_llm.html
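The notebook above relies on top-level await. In a plain Python script, the concurrent run would instead be driven with asyncio.run, as the code comment suggests:

import asyncio

# Outside Jupyter there is no running event loop, so wrap the coroutine:
if __name__ == "__main__":
    asyncio.run(generate_concurrently())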
8885a2a093f2-0
Key Concepts

LLMs

Wrappers around Large Language Models (in particular, the “generate” ability of large language models) are at the core of LangChain functionality. The core method that these classes expose is a generate method, which takes in a list of strings and returns an LLMResult (which contains outputs for all input strings). Read more about LLMResult. This interface operates over a list of strings because lists of strings can often be batched to the LLM provider, providing speed and efficiency gains. For convenience, this class also exposes a simpler, more user-friendly interface (via __call__), which takes in a single string and returns a single string.

Generation

The output of a single generation. Currently in LangChain this is just the generated text, although it could be extended in the future to contain log probs or the like.

LLMResult

The full output of a call to the generate method of the LLM class. Since the generate method takes as input a list of strings, this returns a list of results. Each result consists of a list of generations (since you can request N generations per input string). This also contains an llm_output attribute with provider-specific information about the call.
https://langchain.readthedocs.io/en/latest/modules/llms/key_concepts.html
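The two interfaces described above look like this in practice; a minimal sketch using the OpenAI wrapper from the Getting Started page:

from langchain.llms import OpenAI

llm = OpenAI()

# Batched interface: a list of prompts in, an LLMResult out
result = llm.generate(["Tell me a joke", "Tell me a poem"])
text = result.generations[0][0].text

# Convenience interface (__call__): one string in, one string out
joke = llm("Tell me a joke")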
c0399b4503ef-0
How-To Guides

The examples here all address “how-to” questions for working with LLMs. They are split into the following categories:

Generic Functionality: Covering generic functionality all LLMs should have.
Integrations: Covering integrations with various LLM providers.
Asynchronous: Covering asynchronous functionality.
Streaming: Covering streaming functionality.
https://langchain.readthedocs.io/en/latest/modules/llms/how_to_guides.html
35eede938cb9-0
Getting Started

This notebook goes over how to use the LLM class in LangChain. The LLM class is designed for interfacing with LLMs: there are many LLM providers (OpenAI, Cohere, Hugging Face, etc.), and this class provides a standard interface for all of them. In this part of the documentation we focus on generic LLM functionality; for details on working with a specific LLM wrapper, see the examples in the How-To section.

For this notebook, we will work with an OpenAI LLM wrapper, although the functionality highlighted is generic to all LLM types.

from langchain.llms import OpenAI

llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)

Generate Text: The most basic functionality an LLM has is the ability to call it, passing in a string and getting back a string.

llm("Tell me a joke")

'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

Generate: More broadly, you can call it with a list of inputs, getting back a more complete response than just the text. This complete response includes things like multiple top responses and provider-specific information.

llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)

len(llm_result.generations)

30

llm_result.generations[0]

[Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
 Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]

llm_result.generations[-1]
https://langchain.readthedocs.io/en/latest/modules/llms/getting_started.html
35eede938cb9-1
llm_result.generations[-1]

[Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
 Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]

You can also access provider-specific information that is returned. This information is NOT standardized across providers.

llm_result.llm_output

{'token_usage': {'completion_tokens': 3903, 'total_tokens': 4023, 'prompt_tokens': 120}}

Number of Tokens: You can also estimate how many tokens a piece of text will use with a given model. This is useful because models have a context length (and cost more for more tokens), which means you need to be aware of how long your input text is. Note that by default the tokens are estimated using a Hugging Face tokenizer.

llm.get_num_tokens("what a joke")

3
https://langchain.readthedocs.io/en/latest/modules/llms/getting_started.html
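Because models have a context length, the token estimate can be used as a guard before calling the model. A minimal sketch (the limit below is an illustrative assumption, not a real model constant; check your model's actual context length):

MAX_CONTEXT_TOKENS = 2048  # illustrative limit, not a real constant

prompt = "Tell me a joke"
# get_num_tokens estimates the token count before any network call is made
if llm.get_num_tokens(prompt) > MAX_CONTEXT_TOKENS:
    raise ValueError("Prompt exceeds the model's context window")
response = llm(prompt)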
8b1357755a01-0
Generic Functionality

The examples here are all “how-to” guides for generic functionality that all LLMs share:

LLM Serialization: A walkthrough of how to serialize LLMs to and from disk.
LLM Caching: Covers different types of caches, and how to use a cache to save the results of LLM calls.
Custom LLM: How to create and use a custom LLM class, in case you have an LLM not from one of the standard providers (including one that you host yourself).
Token Usage Tracking: How to track the token usage of various chains/agents/LLM calls.
Fake LLM: How to create and use a fake LLM for testing and debugging purposes.
https://langchain.readthedocs.io/en/latest/modules/llms/generic_how_to.html
14e91458cced-0
Streaming with LLMs

LangChain provides streaming support for LLMs. Currently, we only support streaming for the OpenAI and ChatOpenAI LLM implementations, but streaming support for other LLM implementations is on the roadmap. To utilize streaming, use a CallbackHandler that implements on_llm_new_token. In this example, we are using StreamingStdOutCallbackHandler.

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage

llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = llm("Write me a song about sparkling water.")

Verse 1
I'm sippin' on sparkling water,
It's so refreshing and light,
It's the perfect way to quench my thirst
On a hot summer night.

Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.

Verse 2
I'm sippin' on sparkling water,
It's so bubbly and bright,
It's the perfect way to cool me down
On a hot summer night.

Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.

Verse 3
I'm sippin' on sparkling water,
It's so light and so clear,
It's the perfect way to keep me cool
On a hot summer night.

Chorus
Sparkling water, sparkling water,
https://langchain.readthedocs.io/en/latest/modules/llms/streaming_llm.html
14e91458cced-1
On a hot summer night.

Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.

We still have access to the final LLMResult when using generate. However, token_usage is not currently supported for streaming.

llm.generate(["Tell me a joke."])

Q: What did the fish say when it hit the wall?
A: Dam!

LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': None, 'logprobs': None})]], llm_output={'token_usage': {}})

Here’s an example with ChatOpenAI:

chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])

Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
https://langchain.readthedocs.io/en/latest/modules/llms/streaming_llm.html
14e91458cced-2
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
https://langchain.readthedocs.io/en/latest/modules/llms/streaming_llm.html
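Any handler implementing on_llm_new_token can be swapped in for StreamingStdOutCallbackHandler. A sketch of a handler that collects tokens instead of printing them; subclassing the stdout handler here is just a convenient way to inherit the rest of the callback interface in this version, not the only option:

from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import OpenAI

class CollectTokensHandler(StreamingStdOutCallbackHandler):
    """Accumulate streamed tokens in memory instead of writing to stdout."""

    def __init__(self):
        super().__init__()
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.tokens.append(token)

handler = CollectTokensHandler()
llm = OpenAI(streaming=True, callback_manager=CallbackManager([handler]), temperature=0)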
aaff3357a0db-0
Integrations

The examples here are all “how-to” guides for integrating with various LLM providers:

OpenAI: Covers how to connect to OpenAI models.
Cohere: Covers how to connect to Cohere models.
AI21: Covers how to connect to AI21 models.
Huggingface Hub: Covers how to connect to LLMs hosted on HuggingFace Hub.
Azure OpenAI: Covers how to connect to Azure-hosted OpenAI models.
Manifest: Covers how to utilize the Manifest wrapper.
Goose AI: Covers how to utilize the Goose AI wrapper.
Writer: Covers how to utilize the Writer wrapper.
Banana: Covers how to utilize the Banana wrapper.
Modal: Covers how to utilize the Modal wrapper.
StochasticAI: Covers how to utilize the Stochastic AI wrapper.
Cerebrium: Covers how to utilize the Cerebrium AI wrapper.
Petals: Covers how to utilize the Petals wrapper.
Forefront AI: Covers how to utilize the Forefront AI wrapper.
PromptLayer OpenAI: Covers how to use PromptLayer with LangChain.
Anthropic: Covers how to use Anthropic models with LangChain.
DeepInfra: Covers how to utilize the DeepInfra wrapper.
Self-Hosted Models (via Runhouse): Covers how to run models on existing or on-demand remote compute with LangChain.
https://langchain.readthedocs.io/en/latest/modules/llms/integrations.html
8d7573659681-0
Hugging Face Hub

This example showcases how to connect to the Hugging Face Hub.

from langchain import PromptTemplate, HuggingFaceHub, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":0, "max_length":64}))

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"

print(llm_chain.run(question))

The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The final answer: Seattle Seahawks.
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/huggingface_hub.html
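The same wrapper works for other Hub-hosted models by changing repo_id. A sketch; the repo id and kwargs below are illustrative assumptions, not values from the page:

from langchain import HuggingFaceHub

# Any compatible text-generation repo on the Hub can be selected by id;
# "gpt2" is just an illustrative choice here.
llm = HuggingFaceHub(repo_id="gpt2", model_kwargs={"temperature": 0.7, "max_length": 64})
print(llm("Once upon a time"))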
4f71c02f7769-0
Writer

This example goes over how to use LangChain to interact with Writer models.

from langchain.llms import Writer
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = Writer()
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/writer.html
6b4bb94fe35e-0
Manifest

This notebook goes over how to use Manifest and LangChain. For more detailed information on Manifest, and how to use it with local Hugging Face models as in this example, see https://github.com/HazyResearch/manifest

from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper

manifest = Manifest(
    client_name = "huggingface",
    client_connection = "http://127.0.0.1:5000"
)
print(manifest.client.get_model_params())

{'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B'}

llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})

# Map reduce example
from langchain import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain

_prompt = """Write a concise summary of the following:

{text}

CONCISE SUMMARY:"""
prompt = PromptTemplate(template=_prompt, input_variables=["text"])

text_splitter = CharacterTextSplitter()

mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)

with open('../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/manifest.html
6b4bb94fe35e-1
state_of_the_union = f.read()
mp_chain.run(state_of_the_union)

'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'

Compare HF Models

from langchain.model_laboratory import ModelLaboratory

manifest1 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5000"
    ),
    llm_kwargs={"temperature": 0.01}
)
manifest2 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5001"
    ),
    llm_kwargs={"temperature": 0.01}
)
manifest3 = ManifestWrapper(
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/manifest.html
6b4bb94fe35e-2
manifest3 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5002"
    ),
    llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")

Input: What color is a flamingo?

ManifestWrapper
Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01}
pink

ManifestWrapper
Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01}
A flamingo is a small, round

ManifestWrapper
Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01}
pink
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/manifest.html
a9a71ee2d736-0
ForefrontAI LLM Example

This notebook goes over how to use LangChain with ForefrontAI.

Imports

import os
from langchain.llms import ForefrontAI
from langchain import PromptTemplate, LLMChain

Set the Environment API Key

Make sure to get your API key from ForefrontAI. You are given a 5-day free trial to test different models.

os.environ["FOREFRONTAI_API_KEY"] = "YOUR_KEY_HERE"

Create the ForefrontAI instance

You can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.

llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")

Create a Prompt Template

We will create a prompt template for Question and Answer.

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain

llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain

Provide a question and run the LLMChain.

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/forefrontai_example.html
b09503d79905-0
GooseAI LLM Example

This notebook goes over how to use LangChain with GooseAI.

Install openai

The openai package is required to use the GooseAI API. Install it with pip3 install openai.

$ pip3 install openai

Imports

import os
from langchain.llms import GooseAI
from langchain import PromptTemplate, LLMChain

Set the Environment API Key

Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models.

os.environ["GOOSEAI_API_KEY"] = "YOUR_KEY_HERE"

Create the GooseAI instance

You can specify different parameters such as the model name, max tokens generated, temperature, etc.

llm = GooseAI()

Create a Prompt Template

We will create a prompt template for Question and Answer.

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain

llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain

Provide a question and run the LLMChain.

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/gooseai_example.html
1c881bd9463b-0
StochasticAI

This example goes over how to use LangChain to interact with StochasticAI models.

from langchain.llms import StochasticAI
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = StochasticAI(api_url="YOUR_API_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/stochasticai.html
e29cda58bb51-0
Cohere

This example goes over how to use LangChain to interact with Cohere models.

from langchain.llms import Cohere
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = Cohere()
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/cohere.html
e29cda58bb51-1
llm_chain.run(question)

" Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/cohere.html
c4de5e51a139-0
Aleph Alpha

This example goes over how to use LangChain to interact with Aleph Alpha models.

from langchain.llms import AlephAlpha
from langchain import PromptTemplate, LLMChain

template = """Q: {question}

A:"""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = AlephAlpha(model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What is AI?"
llm_chain.run(question)

' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\n'
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/aleph_alpha.html
28849d9aff7d-0
Self-Hosted Models via Runhouse

This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on on-demand GPUs on AWS, GCP, Azure, or Lambda. For more information, see Runhouse or the Runhouse docs.

from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain import PromptTemplate, LLMChain
import runhouse as rh

# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')

# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
#                  ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
#                  name='rh-a10x')

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)

INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/self_hosted_examples.html
28849d9aff7d-1
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds

"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"

You can also load more custom models through the SelfHostedHuggingFaceLLM interface:

llm = SelfHostedHuggingFaceLLM(
    model_id="google/flan-t5-small",
    task="text2text-generation",
    hardware=gpu,
)

llm("What is the capital of Germany?")

INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC
INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds

'berlin'

Using a custom load function, we can load a custom pipeline directly on the remote hardware:

def load_pipeline():
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline  # Need to be inside the fn in notebooks
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )
    return pipe

def inference_fn(pipeline, prompt, stop = None):
    return pipeline(prompt)[0]["generated_text"][len(prompt):]

llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)

llm("Who is the current US president?")

INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/self_hosted_examples.html
28849d9aff7d-2
INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds

'john w. bush'

You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 GB), and will be pretty slow:

pipeline = load_pipeline()
llm = SelfHostedPipeline.from_pipeline(
    pipeline=pipeline, hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]
)

Instead, we can also send it to the hardware's filesystem, which will be much faster:

import pickle

rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/self_hosted_examples.html
8e02ba5e179c-0
AI21

This example goes over how to use LangChain to interact with AI21 models.

from langchain.llms import AI21
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = AI21()
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/ai21.html
00c219e7fcb3-0
SageMakerEndpoint

This notebook goes over how to use an LLM hosted on a SageMaker endpoint.

!pip3 install langchain boto3

from langchain.docstore.document import Document

example_doc_1 = """
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""

docs = [
    Document(
        page_content=example_doc_1,
    )
]

from typing import Dict

from langchain import PromptTemplate, SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
from langchain.chains.question_answering import load_qa_chain
import json

query = """How long was Elizabeth hospitalized?"""

prompt_template = """Use the following pieces of context to answer the question at the end.

{context}

Question: {question}
Answer:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # "inputs" is a common key for Hugging Face models on SageMaker;
        # adjust it to whatever JSON schema your deployed model expects.
        input_str = json.dumps({"inputs": prompt, **model_kwargs})
        return input_str.encode('utf-8')

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]

content_handler = ContentHandler()
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/sagemaker.html
00c219e7fcb3-1
return response_json[0]["generated_text"] content_handler = ContentHandler() chain = load_qa_chain( llm=SagemakerEndpoint( endpoint_name="endpoint-name", credentials_profile_name="credentials-profile-name", region_name="us-west-2", model_kwargs={"temperature":1e-10}, content_handler=content_handler ), prompt=PROMPT ) chain({"input_documents": docs, "question": query}, return_only_outputs=True) previous PromptLayer OpenAI next Self-Hosted Models via Runhouse By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/sagemaker.html
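The content handler is the part most likely to need per-model adjustment, so it is worth exercising locally before wiring it to an endpoint. A small sanity-check sketch; the fake response body below is an illustrative assumption about the endpoint's JSON shape:

import io
import json

handler = ContentHandler()

# Serialize a prompt the way it would be sent to the endpoint
payload = handler.transform_input("How long was Elizabeth hospitalized?", {"temperature": 1e-10})
print(payload)

# transform_output expects a readable object, like the streaming body SageMaker returns
fake_body = io.BytesIO(json.dumps([{"generated_text": " 3 days."}]).encode("utf-8"))
print(handler.transform_output(fake_body))  # -> " 3 days."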
f642e909fab8-0
Modal

This example goes over how to use LangChain to interact with Modal models.

from langchain.llms import Modal
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = Modal(endpoint_url="YOUR_ENDPOINT_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/modal.html
3990c2fde161-0
PromptLayer OpenAI

This example showcases how to connect to PromptLayer to start recording your OpenAI requests.

Install PromptLayer

The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.

pip install promptlayer

Imports

import os
from langchain.llms import PromptLayerOpenAI
import promptlayer

Set the Environment API Key

You can create a PromptLayer API key at www.promptlayer.com by clicking the settings cog in the navbar. Set it as an environment variable called PROMPTLAYER_API_KEY.

os.environ["PROMPTLAYER_API_KEY"] = "********"

Use the PromptLayerOpenAI LLM like normal

You can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature.

llm = PromptLayerOpenAI(pl_tags=["langchain"])
llm("I am a cat and I want")

' to go outside\n\nUnfortunately, cats cannot go outside without being supervised by a human. Going outside can be dangerous for cats, as they may come into contact with cars, other animals, or other dangers. If you want to go outside, ask your human to take you on a supervised walk or to a safe, enclosed outdoor space.'

The above request should now appear on your PromptLayer dashboard.

Using PromptLayer Track

If you would like to use any of the PromptLayer tracking features, pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.

llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/promptlayer_openai.html
3990c2fde161-1
llm_results = llm.generate(["Tell me a joke"]) for res in llm_results.generations: pl_request_id = res[0].generation_info["pl_request_id"] promptlayer.track.score(request_id=pl_request_id, score=100) Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard. previous Petals LLM Example next SageMakerEndpoint Contents Install PromptLayer Imports Set the Environment API Key Use the PromptLayerOpenAI LLM like normal Using PromptLayer Track By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/promptlayer_openai.html
492bde049bb2-0
.ipynb .pdf DeepInfra LLM Example Contents Imports Set the Environment API Key Create the DeepInfra instance Create a Prompt Template Initiate the LLMChain Run the LLMChain DeepInfra LLM Example# This notebook goes over how to use LangChain with DeepInfra. Imports# import os from langchain.llms import DeepInfra from langchain import PromptTemplate, LLMChain Set the Environment API Key# Make sure to get your API key from DeepInfra. You are given 1 hour of free serverless GPU compute to test different models. You can print your token with deepctl auth token os.environ["DEEPINFRA_API_TOKEN"] = "YOUR_KEY_HERE" Create the DeepInfra instance# Make sure to deploy your model first via deepctl deploy create -m google/flan-t5-xl (for example) llm = DeepInfra(model_id="DEPLOYED MODEL ID") Create a Prompt Template# We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) Initiate the LLMChain# llm_chain = LLMChain(prompt=prompt, llm=llm) Run the LLMChain# Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in 2015?" llm_chain.run(question) previous Cohere next ForefrontAI LLM Example Contents Imports Set the Environment API Key Create the DeepInfra instance Create a Prompt Template Initiate the LLMChain Run the LLMChain By Harrison Chase © Copyright 2023, Harrison Chase.
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/deepinfra_example.html
475b4f4c6e03-0
.ipynb .pdf Banana Banana# This example goes over how to use LangChain to interact with Banana models import os from langchain.llms import Banana from langchain import PromptTemplate, LLMChain os.environ["BANANA_API_KEY"] = "YOUR_API_KEY" template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = Banana(model_key="YOUR_MODEL_KEY") llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) previous Azure OpenAI LLM Example next CerebriumAI LLM Example By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/banana.html
4d3979d3d030-0
.ipynb .pdf Anthropic Anthropic# This example goes over how to use LangChain to interact with Anthropic models from langchain.llms import Anthropic from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = Anthropic() llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) " Step 1: Justin Bieber was born on March 1, 1994\nStep 2: The NFL season ends with the Super Bowl in January/February\nStep 3: Therefore, the Super Bowl that occurred closest to Justin Bieber's birth would be Super Bowl XXIX in 1995\nStep 4: The San Francisco 49ers won Super Bowl XXIX in 1995\n\nTherefore, the answer is the San Francisco 49ers won the Super Bowl in the year Justin Bieber was born." previous Aleph Alpha next Azure OpenAI LLM Example By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/anthropic_example.html
dd6609294558-0
.ipynb .pdf OpenAI OpenAI# This example goes over how to use LangChain to interact with OpenAI models from langchain.llms import OpenAI from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = OpenAI() llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in that year was the Dallas Cowboys.' previous Modal next Petals LLM Example By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/openai.html
206516565635-0
.ipynb .pdf CerebriumAI LLM Example Contents Install cerebrium Imports Set the Environment API Key Create the CerebriumAI instance Create a Prompt Template Initiate the LLMChain Run the LLMChain CerebriumAI LLM Example# This notebook goes over how to use LangChain with CerebriumAI. Install cerebrium# The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium. $ pip3 install cerebrium Imports# import os from langchain.llms import CerebriumAI from langchain import PromptTemplate, LLMChain Set the Environment API Key# Make sure to get your API key from CerebriumAI. You are given 1 hour of free serverless GPU compute to test different models. os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE" Create the CerebriumAI instance# You can specify different parameters such as the model endpoint URL, max length, temperature, etc. You must provide an endpoint URL. llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE") Create a Prompt Template# We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) Initiate the LLMChain# llm_chain = LLMChain(prompt=prompt, llm=llm) Run the LLMChain# Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) previous Banana next
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/cerebriumai_example.html
39e6521d2dfd-0
.ipynb .pdf Petals LLM Example Contents Install petals Imports Set the Environment API Key Create the Petals instance Create a Prompt Template Initiate the LLMChain Run the LLMChain Petals LLM Example# This notebook goes over how to use LangChain with Petals. Install petals# The petals package is required to use the Petals API. Install petals using pip3 install petals. $ pip3 install petals Imports# import os from langchain.llms import Petals from langchain import PromptTemplate, LLMChain Set the Environment API Key# Make sure to get your API key from Hugging Face. os.environ["HUGGINGFACE_API_KEY"] = "YOUR_KEY_HERE" Create the Petals instance# You can specify different parameters such as the model name, max new tokens, temperature, etc. llm = Petals(model_name="bigscience/bloom-petals") Create a Prompt Template# We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) Initiate the LLMChain# llm_chain = LLMChain(prompt=prompt, llm=llm) Run the LLMChain# Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) previous OpenAI next PromptLayer OpenAI Contents Install petals Imports Set the Environment API Key Create the Petals instance Create a Prompt Template Initiate the LLMChain Run the LLMChain By Harrison Chase © Copyright 2023, Harrison Chase.
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/petals_example.html
fec7d594338c-0
.ipynb .pdf Azure OpenAI LLM Example Contents API configuration Deployments Azure OpenAI LLM Example# This notebook goes over how to use Langchain with Azure OpenAI. The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below. API configuration# You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash: # Set this to `azure` export OPENAI_API_TYPE=azure # The API version you want to use: set this to `2022-12-01` for the released version. export OPENAI_API_VERSION=2022-12-01 # The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource. export OPENAI_API_BASE=https://your-resource-name.openai.azure.com # The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource. export OPENAI_API_KEY=<your Azure OpenAI API key> Alternatively, you can configure the API right within your running Python environment: import os os.environ["OPENAI_API_TYPE"] = "azure" ... Deployments# With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use. Let’s say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example: import openai response = openai.Completion.create(
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/azure_openai_example.html
fec7d594338c-1
import openai response = openai.Completion.create( engine="text-davinci-002-prod", prompt="This is a test", max_tokens=5 ) # Import Azure OpenAI from langchain.llms import AzureOpenAI # Create an instance of Azure OpenAI # Replace the deployment name with your own llm = AzureOpenAI(deployment_name="text-davinci-002-prod", model_name="text-davinci-002") # Run the LLM llm("Tell me a joke") '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' We can also print the LLM and see its custom print. print(llm) AzureOpenAI Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} previous Anthropic next Banana Contents API configuration Deployments By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
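For completeness, here is a minimal sketch of doing the same configuration from Python instead of bash; the values are the same placeholders used in the export commands above.

```python
import os

# Mirror the bash exports shown earlier; replace the placeholders with the
# values from your Azure OpenAI resource.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "https://your-resource-name.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "<your Azure OpenAI API key>"
```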
https://langchain.readthedocs.io/en/latest/modules/llms/integrations/azure_openai_example.html
bef95c9f9087-0
.ipynb .pdf Custom LLM Custom LLM# This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain. There is only one required thing that a custom LLM needs to implement: A _call method that takes in a string, some optional stop words, and returns a string. There is a second optional thing it can implement: An _identifying_params property that is used to help with printing of this class. Should return a dictionary. Let’s implement a very simple custom LLM that just returns the first N characters of the input. from langchain.llms.base import LLM from typing import Optional, List, Mapping, Any class CustomLLM(LLM): n: int @property def _llm_type(self) -> str: return "custom" def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str: if stop is not None: raise ValueError("stop kwargs are not permitted.") return prompt[:self.n] @property def _identifying_params(self) -> Mapping[str, Any]: """Get the identifying parameters.""" return {"n": self.n} We can now use this as any other LLM. llm = CustomLLM(n=10) llm("This is a foobar thing") 'This is a ' We can also print the LLM and see its custom print. print(llm) CustomLLM Params: {'n': 10} previous Generic Functionality next Fake LLM By Harrison Chase © Copyright 2023, Harrison Chase.
https://langchain.readthedocs.io/en/latest/modules/llms/examples/custom_llm.html
8e51f6dc3b6e-0
.ipynb .pdf LLM Serialization Contents Loading Saving LLM Serialization# This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc.). from langchain.llms import OpenAI from langchain.llms.loading import load_llm Loading# First, let's go over loading an LLM from disk. LLMs can be saved on disk in two formats: json or yaml. No matter the extension, they are loaded in the same way. !cat llm.json { "model_name": "text-davinci-003", "temperature": 0.7, "max_tokens": 256, "top_p": 1.0, "frequency_penalty": 0.0, "presence_penalty": 0.0, "n": 1, "best_of": 1, "request_timeout": null, "_type": "openai" } llm = load_llm("llm.json") !cat llm.yaml _type: openai best_of: 1 frequency_penalty: 0.0 max_tokens: 256 model_name: text-davinci-003 n: 1 presence_penalty: 0.0 request_timeout: null temperature: 0.7 top_p: 1.0 llm = load_llm("llm.yaml") Saving# If you want to go from an LLM in memory to a serialized version of it, you can do so easily by calling the .save method. Again, this supports both json and yaml. llm.save("llm.json")
https://langchain.readthedocs.io/en/latest/modules/llms/examples/llm_serialization.html
8e51f6dc3b6e-1
llm.save("llm.json") llm.save("llm.yaml") previous LLM Caching next Token Usage Tracking Contents Loading Saving By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/llms/examples/llm_serialization.html
853addbcc3f3-0
.ipynb .pdf Fake LLM Fake LLM# We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way. In this notebook we go over how to use this. We start by using the FakeLLM in an agent. from langchain.llms.fake import FakeListLLM from langchain.agents import load_tools from langchain.agents import initialize_agent tools = load_tools(["python_repl"]) responses=[ "Action: Python REPL\nAction Input: print(2 + 2)", "Final Answer: 4" ] llm = FakeListLLM(responses=responses) agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True) agent.run("whats 2 + 2") > Entering new AgentExecutor chain... Action: Python REPL Action Input: print(2 + 2) Observation: 4 Thought:Final Answer: 4 > Finished chain. '4' previous Custom LLM next LLM Caching By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/llms/examples/fake_llm.html
fb411fc488fc-0
.ipynb .pdf LLM Caching Contents In Memory Cache SQLite Cache Redis Cache SQLAlchemy Cache Custom SQLAlchemy Schemas Optional Caching Optional Caching in Chains LLM Caching# This notebook covers how to cache results of individual LLM calls. from langchain.llms import OpenAI In Memory Cache# import langchain from langchain.cache import InMemoryCache langchain.llm_cache = InMemoryCache() # To make the caching really obvious, let's use a slower model. llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2) %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke") CPU times: user 30.7 ms, sys: 18.6 ms, total: 49.3 ms Wall time: 791 ms "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!" %%time # The second time it is, so it goes faster llm("Tell me a joke") CPU times: user 80 µs, sys: 0 ns, total: 80 µs Wall time: 83.9 µs "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!" SQLite Cache# !rm .langchain.db # We can do the same thing with a SQLite cache from langchain.cache import SQLiteCache langchain.llm_cache = SQLiteCache(database_path=".langchain.db") %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke")
https://langchain.readthedocs.io/en/latest/modules/llms/examples/llm_caching.html
fb411fc488fc-1
llm("Tell me a joke") CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms Wall time: 825 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' %%time # The second time it is, so it goes faster llm("Tell me a joke") CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms Wall time: 2.67 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.' Redis Cache# # We can do the same thing with a Redis cache # (make sure your local Redis instance is running first before running this example) from redis import Redis from langchain.cache import RedisCache langchain.llm_cache = RedisCache(redis_=Redis()) %%time # The first time, it is not yet in cache, so it should take longer llm("Tell me a joke") %%time # The second time it is, so it goes faster llm("Tell me a joke") SQLAlchemy Cache# # You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy. # from langchain.cache import SQLAlchemyCache # from sqlalchemy import create_engine # engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres") # langchain.llm_cache = SQLAlchemyCache(engine) Custom SQLAlchemy Schemas# # You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use: from sqlalchemy import Column, Integer, String, Computed, Index, Sequence from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base
https://langchain.readthedocs.io/en/latest/modules/llms/examples/llm_caching.html
fb411fc488fc-2
from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy_utils import TSVectorType from langchain.cache import SQLAlchemyCache Base = declarative_base() class FulltextLLMCache(Base): # type: ignore """Postgres table for fulltext-indexed LLM Cache""" __tablename__ = "llm_cache_fulltext" id = Column(Integer, Sequence('cache_id'), primary_key=True) prompt = Column(String, nullable=False) llm = Column(String, nullable=False) idx = Column(Integer) response = Column(String) prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True)) __table_args__ = ( Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"), ) engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres") langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache) Optional Caching# You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False) %%time llm("Tell me a joke") CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms Wall time: 745 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!' %%time llm("Tell me a joke")
https://langchain.readthedocs.io/en/latest/modules/llms/examples/llm_caching.html
fb411fc488fc-3
%%time llm("Tell me a joke") CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms Wall time: 623 ms '\n\nTwo guys stole a calendar. They got six months each.' Optional Caching in Chains# You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, its often easier to construct the chain first, and then edit the LLM afterwards. As an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but then not freeze it for the combine step. llm = OpenAI(model_name="text-davinci-002") no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False) from langchain.text_splitter import CharacterTextSplitter from langchain.chains.mapreduce import MapReduceChain text_splitter = CharacterTextSplitter() with open('../../state_of_the_union.txt') as f: state_of_the_union = f.read() texts = text_splitter.split_text(state_of_the_union) from langchain.docstore.document import Document docs = [Document(page_content=t) for t in texts[:3]] from langchain.chains.summarize import load_summarize_chain chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm) %%time chain.run(docs) CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms Wall time: 5.09 s
https://langchain.readthedocs.io/en/latest/modules/llms/examples/llm_caching.html
fb411fc488fc-4
Wall time: 5.09 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.' When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step. %%time chain.run(docs) CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms Wall time: 1.04 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.' previous Fake LLM next LLM Serialization Contents In Memory Cache SQLite Cache Redis Cache SQLAlchemy Cache Custom SQLAlchemy Schemas Optional Caching Optional Caching in Chains By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/llms/examples/llm_caching.html
0828591adb15-0
.ipynb .pdf Token Usage Tracking Token Usage Tracking# This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API. Let’s first look at an extremely simple example of tracking token usage for a single LLM call. from langchain.llms import OpenAI from langchain.callbacks import get_openai_callback llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2) with get_openai_callback() as cb: result = llm("Tell me a joke") print(cb.total_tokens) 42 Anything inside the context manager will get tracked. Here’s an example of using it to track multiple calls in sequence. with get_openai_callback() as cb: result = llm("Tell me a joke") result2 = llm("Tell me a joke") print(cb.total_tokens) 83 If a chain or agent with multiple steps in it is used, it will track all those steps. from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.llms import OpenAI llm = OpenAI(temperature=0) tools = load_tools(["serpapi", "llm-math"], llm=llm) agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True) with get_openai_callback() as cb: response = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?") print(cb.total_tokens) > Entering new AgentExecutor chain... I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power. Action: Search
https://langchain.readthedocs.io/en/latest/modules/llms/examples/token_usage_tracking.html
0828591adb15-1
Action: Search Action Input: "Olivia Wilde boyfriend" Observation: Jason Sudeikis Thought: I need to find out Jason Sudeikis' age Action: Search Action Input: "Jason Sudeikis age" Observation: 47 years Thought: I need to calculate 47 raised to the 0.23 power Action: Calculator Action Input: 47^0.23 Observation: Answer: 2.4242784855673896 Thought: I now know the final answer Final Answer: Jason Sudeikis, Olivia Wilde's boyfriend, is 47 years old and his age raised to the 0.23 power is 2.4242784855673896. > Finished chain. 1465 previous LLM Serialization next Integrations By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
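If you need a finer breakdown than total_tokens, the callback object also exposes per-category counts. A hedged sketch follows; the prompt_tokens and completion_tokens attribute names are assumed from the OpenAI callback handler and may differ by version.

```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI(model_name="text-davinci-002")
with get_openai_callback() as cb:
    llm("Tell me a joke")
    # Assumed attributes: token counts split by prompt vs. completion.
    print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens)
```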
https://langchain.readthedocs.io/en/latest/modules/llms/examples/token_usage_tracking.html
df0af113f1b1-0
.md .pdf Key Concepts Contents Chains Sequential Chain Prompt Selectors Key Concepts# Chains# A chain is made up of links, which can be either primitives or other chains. They vary greatly in complexity and are a combination of generic, highly configurable pipelines and more narrow (but usually more complex) pipelines. Sequential Chain# This is a specific type of chain where multiple other chains are run in sequence, with the outputs being added as inputs to the next. A subtype of this type of chain is the SimpleSequentialChain, where all subchains have only one input and one output, and the output of one is therefore used as the sole input to the next chain. Prompt Selectors# One thing that we’ve noticed is that the best prompt to use is really dependent on the model you use. Some prompts work really well with some models, but not great with others. One of our goals is to provide good chains that “just work” out of the box. A big part of chains like that is having prompts that “just work”. So rather than having a default prompt for chains, we are moving towards a paradigm where, if a prompt is not explicitly provided, we select one with a PromptSelector. This class takes in the model passed in, and returns a default prompt. The inner workings of the PromptSelector can look at any aspect of the model - LLM vs ChatModel, OpenAI vs Cohere, GPT3 vs GPT4, etc. Due to this being a newer feature, this may not be implemented for all chains, but this is the direction we are moving in. previous Async API for Chain next Chains Contents Chains Sequential Chain Prompt Selectors By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
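To make the Prompt Selectors idea above concrete, here is a hedged sketch using ConditionalPromptSelector; the class and the is_chat_model helper are assumed from langchain.chains.prompt_selector and may differ by version.

```python
from langchain.chains.prompt_selector import ConditionalPromptSelector, is_chat_model
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate

default_prompt = PromptTemplate(
    input_variables=["question"], template="Answer concisely: {question}"
)
chat_prompt = ChatPromptTemplate.from_messages(
    [HumanMessagePromptTemplate.from_template("Answer concisely: {question}")]
)

# Pick the chat-style prompt for chat models, the plain prompt otherwise.
selector = ConditionalPromptSelector(
    default_prompt=default_prompt,
    conditionals=[(is_chat_model, chat_prompt)],
)

llm = OpenAI()
prompt = selector.get_prompt(llm)  # returns default_prompt for a plain LLM
```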
https://langchain.readthedocs.io/en/latest/modules/chains/key_concepts.html
d57d64d9234d-0
.rst .pdf How-To Guides How-To Guides# A chain is made up of links, which can be either primitives or other chains. Primitives can be either prompts, llms, utils, or other chains. The examples here are all end-to-end chains for specific applications. They are broken up into three categories: Generic Chains: Generic chains, that are meant to help build other chains rather than serve a particular purpose. Utility Chains: Chains consisting of an LLMChain interacting with a specific util. Asynchronous: Covering asynchronous functionality. In addition to different types of chains, we also have the following how-to guides for working with chains in general: Load From Hub: This notebook covers how to load chains from LangChainHub. previous Getting Started next Generic Chains By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
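As a quick illustration of the hub loader mentioned above, a hedged sketch; the lc:// path is illustrative, and the full walkthrough lives in the Load From Hub notebook.

```python
from langchain.chains import load_chain

# Load a chain definition from LangChainHub (path is illustrative).
chain = load_chain("lc://chains/llm-math/chain.json")
chain.run("What is 2 raised to the 10th power?")
```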
https://langchain.readthedocs.io/en/latest/modules/chains/how_to_guides.html
d197a81a4053-0
.rst .pdf Utility Chains Utility Chains# A chain is made up of links, which can be either primitives or other chains. Primitives can be either prompts, llms, utils, or other chains. The examples here are all end-to-end chains for specific applications, focused on an LLMChain interacting with a specific utility. LLMMath Links Used: Python REPL, LLMChain Notes: This chain takes user input (a math question), uses an LLMChain to convert it to a Python code snippet to run in the Python REPL, and then returns that as the result. Example Notebook PAL Links Used: Python REPL, LLMChain Notes: This chain takes user input (a reasoning question), uses an LLMChain to convert it to a Python code snippet to run in the Python REPL, and then returns that as the result. Paper Example Notebook SQLDatabase Chain Links Used: SQLDatabase, LLMChain Notes: This chain takes user input (a question), uses one LLMChain to construct a SQL query to run against the SQL database, and then uses another LLMChain to take the results of that query and use them to answer the original question. Example Notebook API Chain Links Used: LLMChain, Requests Notes: This chain first uses an LLM to construct the URL to hit, then makes that request with the Requests wrapper, and finally runs that result through the language model again in order to produce a natural language response. Example Notebook LLMBash Chain Links Used: BashProcess, LLMChain Notes: This chain takes user input (a question), uses an LLMChain to convert it to a bash command to run in the terminal, and then returns that as the result. Example Notebook LLMChecker Chain Links Used: LLMChain
https://langchain.readthedocs.io/en/latest/modules/chains/utility_how_to.html
d197a81a4053-1
Example Notebook LLMChecker Chain Links Used: LLMChain Notes: This chain takes user input (a question), uses an LLMChain to answer that question, and then uses other LLMChains to self-check that answer. Example Notebook LLMRequests Chain Links Used: Requests, LLMChain Notes: This chain takes a URL and other inputs, uses Requests to get the data at that URL, and then passes that along with the other inputs into an LLMChain to generate a response. The example included shows how to ask a question to Google - it first constructs a Google URL, then fetches the data there, then passes that data + the original question into an LLMChain to get an answer. Example Notebook Moderation Chain Links Used: LLMChain, ModerationChain Notes: This chain shows how to use OpenAI’s content moderation endpoint to screen output, and shows how to connect this to an LLMChain; a minimal sketch follows below. Example Notebook previous Transformation Chain next API Chains By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
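Here is the promised sketch of the Moderation chain: a minimal, hedged example assuming an OpenAI API key is set in the environment.

```python
from langchain.chains import OpenAIModerationChain

# Screens text against OpenAI's content moderation endpoint; by default the
# chain returns a warning string for flagged content rather than raising.
moderation_chain = OpenAIModerationChain()
moderation_chain.run("This is perfectly innocuous text.")
```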
https://langchain.readthedocs.io/en/latest/modules/chains/utility_how_to.html
e6adc41d17b7-0
.ipynb .pdf Getting Started Contents Why do we need chains? Query an LLM with the LLMChain Combine chains with the SequentialChain Create a custom chain with the Chain class Getting Started# In this tutorial, we will learn about creating simple chains in LangChain: how to create a chain, add components to it, and run it. We will cover: Using a simple LLM chain Creating sequential chains Creating a custom chain Why do we need chains?# Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components. Query an LLM with the LLMChain# The LLMChain is a simple chain that takes in a prompt template, formats it with the user input, and returns the response from an LLM. To use the LLMChain, first create a prompt template. from langchain.prompts import PromptTemplate from langchain.llms import OpenAI llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM. from langchain.chains import LLMChain chain = LLMChain(llm=llm, prompt=prompt) # Run the chain only specifying the input variable. print(chain.run("colorful socks")) Rainbow Socks Co.
https://langchain.readthedocs.io/en/latest/modules/chains/getting_started.html
e6adc41d17b7-1
print(chain.run("colorful socks")) Rainbow Socks Co. You can use a chat model in an LLMChain as well: from langchain.chat_models import ChatOpenAI from langchain.prompts.chat import ( ChatPromptTemplate, HumanMessagePromptTemplate, ) human_message_prompt = HumanMessagePromptTemplate( prompt=PromptTemplate( template="What is a good name for a company that makes {product}?", input_variables=["product"], ) ) chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt]) chat = ChatOpenAI(temperature=0.9) chain = LLMChain(llm=chat, prompt=chat_prompt_template) print(chain.run("colorful socks")) Rainbow Threads This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains. Combine chains with the SequentialChain# The next step after calling a language model is to make a series of calls to a language model. We can do this using sequential chains, which are chains that execute their links in a predefined order. Specifically, we will use the SimpleSequentialChain. This is the simplest type of a sequential chain, where each step has a single input/output, and the output of one step is the input to the next. In this tutorial, our sequential chain will: First, create a company name for a product. We will reuse the LLMChain we’d previously initialized to create this company name. Then, create a catchphrase for the product. We will initialize a new LLMChain to create this catchphrase, as shown below. second_prompt = PromptTemplate( input_variables=["company_name"], template="Write a catchphrase for the following company: {company_name}", )
https://langchain.readthedocs.io/en/latest/modules/chains/getting_started.html
e6adc41d17b7-2
template="Write a catchphrase for the following company: {company_name}", ) chain_two = LLMChain(llm=llm, prompt=second_prompt) Now we can combine the two LLMChains, so that we can create a company name and a catchphrase in a single step. from langchain.chains import SimpleSequentialChain overall_chain = SimpleSequentialChain(chains=[chain, chain_two], verbose=True) # Run the chain specifying only the input variable for the first chain. catchphrase = overall_chain.run("colorful socks") print(catchphrase) > Entering new SimpleSequentialChain chain... Cheerful Toes. "Spread smiles from your toes!" > Finished SimpleSequentialChain chain. "Spread smiles from your toes!" Create a custom chain with the Chain class# LangChain provides many chains out of the box, but sometimes you may want to create a custom chain for your specific use case. For this example, we will create a custom chain that concatenates the outputs of 2 LLMChains. In order to create a custom chain: Start by subclassing the Chain class, Fill out the input_keys and output_keys properties, Add the _call method that shows how to execute the chain. These steps are demonstrated in the example below: from langchain.chains import LLMChain from langchain.chains.base import Chain from typing import Dict, List class ConcatenateChain(Chain): chain_1: LLMChain chain_2: LLMChain @property def input_keys(self) -> List[str]: # Union of the input keys of the two chains. all_input_vars = set(self.chain_1.input_keys).union(set(self.chain_2.input_keys)) return list(all_input_vars) @property
https://langchain.readthedocs.io/en/latest/modules/chains/getting_started.html
e6adc41d17b7-3
return list(all_input_vars) @property def output_keys(self) -> List[str]: return ['concat_output'] def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: output_1 = self.chain_1.run(inputs) output_2 = self.chain_2.run(inputs) return {'concat_output': output_1 + output_2} Now, we can try running the chain that we defined. prompt_1 = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain_1 = LLMChain(llm=llm, prompt=prompt_1) prompt_2 = PromptTemplate( input_variables=["product"], template="What is a good slogan for a company that makes {product}?", ) chain_2 = LLMChain(llm=llm, prompt=prompt_2) concat_chain = ConcatenateChain(chain_1=chain_1, chain_2=chain_2) concat_output = concat_chain.run("colorful socks") print(f"Concatenated output:\n{concat_output}") Concatenated output: Rainbow Socks Co. "Step Into Colorful Comfort!" That’s it! For more details about how to do cool things with Chains, check out the how-to guide for chains. previous Chains next How-To Guides Contents Why do we need chains? Query an LLM with the LLMChain Combine chains with the SequentialChain Create a custom chain with the Chain class By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/chains/getting_started.html
3ab2359bfe6c-0
.rst .pdf Generic Chains Generic Chains# A chain is made up of links, which can be either primitives or other chains. Primitives can be either prompts, llms, utils, or other chains. The examples here are all generic end-to-end chains that are meant to be used to construct other chains rather than serving a specific purpose. LLMChain Links Used: PromptTemplate, LLM Notes: This chain is the simplest chain, and is widely used by almost every other chain. This chain takes arbitrary user input, creates a prompt with it from the PromptTemplate, passes that to the LLM, and then returns the output of the LLM as the final output. Example Notebook Transformation Chain Links Used: TransformationChain Notes: This notebook shows how to use the Transformation Chain, which takes an arbitrary python function and applies it to inputs/outputs of other chains. Example Notebook Sequential Chain Links Used: Sequential Notes: This notebook shows how to combine calling multiple other chains in sequence. Example Notebook previous How-To Guides next Loading from LangChainHub By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
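As a quick sketch of the Transformation Chain entry above: a minimal, hedged example of wrapping an arbitrary Python function in a TransformChain, rather than the full notebook.

```python
from langchain.chains import TransformChain

def shorten(inputs: dict) -> dict:
    # Keep only the first 500 characters of the incoming text.
    return {"short_text": inputs["text"][:500]}

transform_chain = TransformChain(
    input_variables=["text"],
    output_variables=["short_text"],
    transform=shorten,
)
transform_chain.run("some very long document text...")
```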
https://langchain.readthedocs.io/en/latest/modules/chains/generic_how_to.html
a931f2a9384c-0
.ipynb .pdf Async API for Chain Async API for Chain# LangChain provides async support for Chains by leveraging the asyncio library. Async methods are currently supported in LLMChain (through arun, apredict, acall), LLMMathChain (through arun and acall), ChatVectorDBChain, and QA chains. Async support for other chains is on the roadmap. import asyncio import time from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.chains import LLMChain def generate_serially(): llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=llm, prompt=prompt) for _ in range(5): resp = chain.run(product="toothpaste") print(resp) async def async_generate(chain): resp = await chain.arun(product="toothpaste") print(resp) async def generate_concurrently(): llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=llm, prompt=prompt) tasks = [async_generate(chain) for _ in range(5)] await asyncio.gather(*tasks) s = time.perf_counter() # If running this outside of Jupyter, use asyncio.run(generate_concurrently()) await generate_concurrently() elapsed = time.perf_counter() - s
https://langchain.readthedocs.io/en/latest/modules/chains/async_chain.html
a931f2a9384c-1
await generate_concurrently() elapsed = time.perf_counter() - s print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m') s = time.perf_counter() generate_serially() elapsed = time.perf_counter() - s print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m') BrightSmile Toothpaste Company BrightSmile Toothpaste Co. BrightSmile Toothpaste Gleaming Smile Inc. SparkleSmile Toothpaste Concurrent executed in 1.54 seconds. BrightSmile Toothpaste Co. MintyFresh Toothpaste Co. SparkleSmile Toothpaste. Pearly Whites Toothpaste Co. BrightSmile Toothpaste. Serial executed in 6.38 seconds. previous SQLite example next Key Concepts By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/chains/async_chain.html
301170efb649-0
.ipynb .pdf LLMRequestsChain LLMRequestsChain# Using the requests library to get HTML results from a URL, and then an LLM to parse the results from langchain.llms import OpenAI from langchain.chains import LLMRequestsChain, LLMChain from langchain.prompts import PromptTemplate template = """Between >>> and <<< are the raw search result text from google. Extract the answer to the question '{query}' or say "not found" if the information is not contained. Use the format Extracted:<answer or "not found"> >>> {requests_result} <<< Extracted:""" PROMPT = PromptTemplate( input_variables=["query", "requests_result"], template=template, ) chain = LLMRequestsChain(llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT)) question = "What are the Three (3) biggest countries, and their respective sizes?" inputs = { "query": question, "url": "https://www.google.com/search?q=" + question.replace(" ", "+") } chain(inputs) {'query': 'What are the Three (3) biggest countries, and their respective sizes?', 'url': 'https://www.google.com/search?q=What+are+the+Three+(3)+biggest+countries,+and+their+respective+sizes?', 'output': ' Russia (17,098,242 km²), Canada (9,984,670 km²), United States (9,826,675 km²)'} previous LLM Math next LLMSummarizationCheckerChain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/chains/examples/llm_requests.html
9892213f8ba4-0
.ipynb .pdf LLM Math Contents Customize Prompt LLM Math# This notebook showcases using LLMs and Python REPLs to do complex word math problems. from langchain import OpenAI, LLMMathChain llm = OpenAI(temperature=0) llm_math = LLMMathChain(llm=llm, verbose=True) llm_math.run("What is 13 raised to the .3432 power?") > Entering new LLMMathChain chain... What is 13 raised to the .3432 power? ```python import math print(math.pow(13, .3432)) ``` Answer: 2.4116004626599237 > Finished chain. 'Answer: 2.4116004626599237\n' Customize Prompt# You can also customize the prompt that is used. Here is an example prompting it to use numpy from langchain.prompts.prompt import PromptTemplate _PROMPT_TEMPLATE = """You are GPT-3, and you can't do math. You can do basic math, and your memorization abilities are impressive, but you can't do any complex calculations that a human could not do in their head. You also have an annoying tendency to just make up highly specific, but wrong, answers. So we hooked you up to a Python 3 kernel, and now you can execute code. If you execute code, you must print out the final answer using the print function. You MUST use the python package numpy to answer your question. You must import numpy as np. Question: ${{Question with hard calculation.}} ```python ${{Code that prints what you need to know}} print(${{code}}) ``` ```output ${{Output of your code}} ``` Answer: ${{Answer}} Begin.
https://langchain.readthedocs.io/en/latest/modules/chains/examples/llm_math.html
9892213f8ba4-1
${{Output of your code}} ``` Answer: ${{Answer}} Begin. Question: What is 37593 * 67? ```python import numpy as np print(np.multiply(37593, 67)) ``` ```output 2518731 ``` Answer: 2518731 Question: {question}""" PROMPT = PromptTemplate(input_variables=["question"], template=_PROMPT_TEMPLATE) llm_math = LLMMathChain(llm=llm, prompt=PROMPT, verbose=True) llm_math.run("What is 13 raised to the .3432 power?") > Entering new LLMMathChain chain... What is 13 raised to the .3432 power? ```python import numpy as np print(np.power(13, .3432)) ``` Answer: 2.4116004626599237 > Finished chain. 'Answer: 2.4116004626599237\n' previous LLMCheckerChain next LLMRequestsChain Contents Customize Prompt By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/chains/examples/llm_math.html
14685603d528-0
.ipynb .pdf SQLite example Contents Customize Prompt Return Intermediate Steps Choosing how to limit the number of rows returned Adding example rows from each table Custom Table Info SQLDatabaseSequentialChain SQLite example# This example showcases hooking up an LLM to answer questions over a database. This uses the example Chinook database. To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository. from langchain import OpenAI, SQLDatabase, SQLDatabaseChain db = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db") llm = OpenAI(temperature=0) NOTE: For data-sensitive projects, you can specify return_direct=True in the SQLDatabaseChain initialization to directly return the output of the SQL query without any additional formatting. This prevents the LLM from seeing any contents within the database. Note, however, the LLM still has access to the database schema (i.e. dialect, table and key names) by default. db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True) db_chain.run("How many employees are there?") > Entering new SQLDatabaseChain chain... How many employees are there? SQLQuery: /Users/harrisonchase/workplace/langchain/langchain/sql_database.py:120: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage. sample_rows = connection.execute(command) SELECT COUNT(*) FROM Employee; SQLResult: [(8,)] Answer: There are 8 employees. > Finished chain.
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
14685603d528-1
Answer: There are 8 employees. > Finished chain. ' There are 8 employees.' Customize Prompt# You can also customize the prompt that is used. Here is an example prompting it to understand that foobar is the same as the Employee table from langchain.prompts.prompt import PromptTemplate _DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Use the following format: Question: "Question here" SQLQuery: "SQL Query to run" SQLResult: "Result of the SQLQuery" Answer: "Final answer here" Only use the following tables: {table_info} If someone asks for the table foobar, they really mean the employee table. Question: {input}""" PROMPT = PromptTemplate( input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE ) db_chain = SQLDatabaseChain(llm=llm, database=db, prompt=PROMPT, verbose=True) db_chain.run("How many employees are there in the foobar table?") > Entering new SQLDatabaseChain chain... How many employees are there in the foobar table? SQLQuery: SELECT COUNT(*) FROM Employee; SQLResult: [(8,)] Answer: There are 8 employees in the foobar table. > Finished chain. ' There are 8 employees in the foobar table.' Return Intermediate Steps# You can also return the intermediate steps of the SQLDatabaseChain. This allows you to access the SQL statement that was generated, as well as the result of running that against the SQL Database. db_chain = SQLDatabaseChain(llm=llm, database=db, prompt=PROMPT, verbose=True, return_intermediate_steps=True)
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
14685603d528-2
result = db_chain("How many employees are there in the foobar table?") result["intermediate_steps"] > Entering new SQLDatabaseChain chain... How many employees are there in the foobar table? SQLQuery: SELECT COUNT(*) FROM Employee; SQLResult: [(8,)] Answer: There are 8 employees in the foobar table. > Finished chain. [' SELECT COUNT(*) FROM Employee;', '[(8,)]'] Choosing how to limit the number of rows returned# If you are querying for several rows of a table you can select the maximum number of results you want to get by using the ‘top_k’ parameter (default is 10). This is useful for avoiding query results that exceed the prompt max length or consume tokens unnecessarily. db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True, top_k=3) db_chain.run("What are some example tracks by composer Johann Sebastian Bach?") > Entering new SQLDatabaseChain chain... What are some example tracks by composer Johann Sebastian Bach? SQLQuery: SELECT Name, Composer FROM Track WHERE Composer LIKE '%Johann Sebastian Bach%' LIMIT 3; SQLResult: [('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Johann Sebastian Bach'), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Johann Sebastian Bach'), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', 'Johann Sebastian Bach')]
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
14685603d528-3
Answer: Some example tracks by composer Johann Sebastian Bach are 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', and 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude'. > Finished chain. ' Some example tracks by composer Johann Sebastian Bach are \'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\', \'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria\', and \'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\'.' Adding example rows from each table# Sometimes, the format of the data is not obvious and it is optimal to include a sample of rows from the tables in the prompt to allow the LLM to understand the data before providing a final query. Here we will use this feature to let the LLM know that artists are saved with their full names by providing two rows from the Track table. db = SQLDatabase.from_uri( "sqlite:///../../../../notebooks/Chinook.db", include_tables=['Track'], # we include only one table to save tokens in the prompt :) sample_rows_in_table_info=2) The sample rows are added to the prompt after each corresponding table’s column information: print(db.table_info) CREATE TABLE "Track" ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "AlbumId" INTEGER, "MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER,
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
14685603d528-4
"MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER, "Composer" NVARCHAR(220), "Milliseconds" INTEGER NOT NULL, "Bytes" INTEGER, "UnitPrice" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("TrackId"), FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId") ) /* 2 rows from Track table: TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice 1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.99 2 Balls to the Wall 2 2 1 None 342562 5510424 0.99 */ /home/jon/projects/langchain/langchain/sql_database.py:135: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage. sample_rows = connection.execute(command) db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True) db_chain.run("What are some example tracks by Bach?") > Entering new SQLDatabaseChain chain... What are some example tracks by Bach? SQLQuery: SELECT Name FROM Track WHERE Composer LIKE '%Bach%' LIMIT 5;
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
14685603d528-5
SQLQuery: SELECT Name FROM Track WHERE Composer LIKE '%Bach%' LIMIT 5; SQLResult: [('American Woman',), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace',), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria',), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude',), ('Toccata and Fugue in D Minor, BWV 565: I. Toccata',)] Answer: Some example tracks by Bach are 'American Woman', 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', and 'Toccata and Fugue in D Minor, BWV 565: I. Toccata'. > Finished chain. ' Some example tracks by Bach are \'American Woman\', \'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\', \'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria\', \'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\', and \'Toccata and Fugue in D Minor, BWV 565: I. Toccata\'.' Custom Table Info#
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
14685603d528-6
Custom Table Info# In some cases, it can be useful to provide custom table information instead of using the automatically generated table definitions and the first sample_rows_in_table_info sample rows. For example, if you know that the first few rows of a table are uninformative, it could help to manually provide example rows that are more diverse or provide more information to the model. It is also possible to limit the columns that will be visible to the model if there are unnecessary columns. This information can be provided as a dictionary with table names as the keys and table information as the values. For example, let’s provide a custom definition and sample rows for the Track table with only a few columns: custom_table_info = { "Track": """CREATE TABLE Track ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "Composer" NVARCHAR(220), PRIMARY KEY ("TrackId") ) /* 3 rows from Track table: TrackId Name Composer 1 For Those About To Rock (We Salute You) Angus Young, Malcolm Young, Brian Johnson 2 Balls to the Wall None 3 My favorite song ever The coolest composer of all time */""" } db = SQLDatabase.from_uri( "sqlite:///../../../../notebooks/Chinook.db", include_tables=['Track', 'Playlist'], sample_rows_in_table_info=2, custom_table_info=custom_table_info) print(db.table_info) CREATE TABLE "Playlist" ( "PlaylistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("PlaylistId") ) /* 2 rows from Playlist table: PlaylistId Name 1 Music 2 Movies */ CREATE TABLE Track (
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
14685603d528-7
PlaylistId Name 1 Music 2 Movies */ CREATE TABLE Track ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "Composer" NVARCHAR(220), PRIMARY KEY ("TrackId") ) /* 3 rows from Track table: TrackId Name Composer 1 For Those About To Rock (We Salute You) Angus Young, Malcolm Young, Brian Johnson 2 Balls to the Wall None 3 My favorite song ever The coolest composer of all time */ Note how our custom table definition and sample rows for Track overrides the sample_rows_in_table_info parameter. Tables that are not overridden by custom_table_info, in this example Playlist, will have their table info gathered automatically as usual. db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True) db_chain.run("What are some example tracks by Bach?") > Entering new SQLDatabaseChain chain... What are some example tracks by Bach? SQLQuery: SELECT Name, Composer FROM Track WHERE Composer LIKE '%Bach%' LIMIT 5; SQLResult: [('American Woman', 'B. Cummings/G. Peterson/M.J. Kale/R. Bachman'), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Johann Sebastian Bach'), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Johann Sebastian Bach'), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', 'Johann Sebastian Bach'), ('Toccata and Fugue in D Minor, BWV 565: I. Toccata', 'Johann Sebastian Bach')]
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
14685603d528-8
Answer: Some example tracks by Bach are 'American Woman', 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', and 'Toccata and Fugue in D Minor, BWV 565: I. Toccata'. > Finished chain. ' Some example tracks by Bach are \'American Woman\', \'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\', \'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria\', \'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\', and \'Toccata and Fugue in D Minor, BWV 565: I. Toccata\'.' SQLDatabaseSequentialChain# A sequential chain for querying a SQL database. The chain is as follows: 1. Based on the query, determine which tables to use. 2. Based on those tables, call the normal SQL database chain. This is useful in cases where the number of tables in the database is large. from langchain.chains import SQLDatabaseSequentialChain db = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db") chain = SQLDatabaseSequentialChain.from_llm(llm, db, verbose=True) chain.run("How many employees are also customers?") > Entering new SQLDatabaseSequentialChain chain... Table names to use: ['Customer', 'Employee'] > Entering new SQLDatabaseChain chain... How many employees are also customers?
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
14685603d528-9
> Entering new SQLDatabaseChain chain... How many employees are also customers? SQLQuery: SELECT COUNT(*) FROM Employee INNER JOIN Customer ON Employee.EmployeeId = Customer.SupportRepId; SQLResult: [(59,)] Answer: 59 employees are also customers. > Finished chain. > Finished chain. ' 59 employees are also customers.' previous PAL next Async API for Chain Contents Customize Prompt Return Intermediate Steps Choosing how to limit the number of rows returned Adding example rows from each table Custom Table Info SQLDatabaseSequentialChain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
343660a0e518-0
.ipynb .pdf PAL Contents Math Prompt Colored Objects Intermediate Steps PAL# Implements Program-Aided Language Models, as in https://arxiv.org/pdf/2211.10435.pdf. from langchain.chains import PALChain from langchain import OpenAI llm = OpenAI(model_name='code-davinci-002', temperature=0, max_tokens=512) Math Prompt# pal_chain = PALChain.from_math_prompt(llm, verbose=True) question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?" pal_chain.run(question) > Entering new PALChain chain... def solution(): """Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?""" cindy_pets = 4 marcia_pets = cindy_pets + 2 jan_pets = marcia_pets * 3 total_pets = cindy_pets + marcia_pets + jan_pets result = total_pets return result > Finished chain. '28' Colored Objects# pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True) question = "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?" pal_chain.run(question) > Entering new PALChain chain... # Put objects into a list to record ordering objects = [] objects += [('booklet', 'blue')] * 2
https://langchain.readthedocs.io/en/latest/modules/chains/examples/pal.html