Unnamed: 0 (int64, 0-4.66k) | page content (string, 23-2k chars) | description (string, 8-925 chars)
---|---|---|
3,100 | 'id': 'ZPuESCQAAAAJ', 'link': 'https://scholar.google.com/citations?user=ZPuESCQAAAAJ&hl=en&oi=sra'}, {'name': 'Q Yuan', 'id': 'B059m2EAAAAJ', 'link': 'https://scholar.google.com/citations?user=B059m2EAAAAJ&hl=en&oi=sra'}]}, {'position': 7, 'title': 'Large language models in machine translation', 'data_cid': 'sY5m_Y3-0Y4J', 'link': 'http://research.google/pubs/pub33278.pdf', 'publication': 'T Brants, AC Popat, P Xu, FJ Och, J Dean ' '- 2007 - research.google', 'snippet': '… the benefits of largescale statistical ' 'language modeling in ma… trillion tokens, ' 'resulting in language models having up to ' '300 … is inexpensive to train on large data ' 'sets and approaches the …', 'type': 'PDF', 'inline_links': {'cited_by': {'cites_id': '10291286509313494705', 'total': 737, 'link': 'https://scholar.google.com/scholar?cites=10291286509313494705&as_sdt=5,33&sciodt=0,33&hl=en'}, 'versions': {'cluster_id': '10291286509313494705', 'total': 31, 'link': 'https://scholar.google.com/scholar?cluster=10291286509313494705&hl=en&as_sdt=0,33'}, 'related_articles_link': | This notebook shows examples of how to use SearchApi to search the web. Go to https://www.searchapi.io/ to sign up for a free account and get API key. |
3,101 | 'related_articles_link': 'https://scholar.google.com/scholar?q=related:sY5m_Y3-0Y4J:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33', 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:sY5m_Y3-0Y4J:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'}, 'resource': {'name': 'research.google', 'format': 'PDF', 'link': 'http://research.google/pubs/pub33278.pdf'}, 'authors': [{'name': 'FJ Och', 'id': 'ITGdg6oAAAAJ', 'link': 'https://scholar.google.com/citations?user=ITGdg6oAAAAJ&hl=en&oi=sra'}, {'name': 'J Dean', 'id': 'NMS69lQAAAAJ', 'link': 'https://scholar.google.com/citations?user=NMS69lQAAAAJ&hl=en&oi=sra'}]}, {'position': 8, 'title': 'A watermark for large language models', 'data_cid': 'BlSyLHT4iiEJ', 'link': 'https://arxiv.org/abs/2301.10226', 'publication': 'J Kirchenbauer, J Geiping, Y Wen, J ' 'Katz… - arXiv preprint arXiv …, 2023 - ' 'arxiv.org', 'snippet': '… To derive this watermark, we examine what ' 'happens in the language model just before it ' 'produces a probability vector. The last ' 'layer of the language model outputs a vector ' 'of logits l(t). …', 'inline_links': {'cited_by': {'cites_id': '2417017327887471622', | This notebook shows examples of how to use SearchApi to search the web. Go to https://www.searchapi.io/ to sign up for a free account and get API key. |
3,102 | {'cites_id': '2417017327887471622', 'total': 104, 'link': 'https://scholar.google.com/scholar?cites=2417017327887471622&as_sdt=5,33&sciodt=0,33&hl=en'}, 'versions': {'cluster_id': '2417017327887471622', 'total': 4, 'link': 'https://scholar.google.com/scholar?cluster=2417017327887471622&hl=en&as_sdt=0,33'}, 'related_articles_link': 'https://scholar.google.com/scholar?q=related:BlSyLHT4iiEJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33', 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:BlSyLHT4iiEJ:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'}, 'resource': {'name': 'arxiv.org', 'format': 'PDF', 'link': 'https://arxiv.org/pdf/2301.10226.pdf?curius=1419'}, 'authors': [{'name': 'J Kirchenbauer', 'id': '48GJrbsAAAAJ', 'link': 'https://scholar.google.com/citations?user=48GJrbsAAAAJ&hl=en&oi=sra'}, {'name': 'J Geiping', 'id': '206vNCEAAAAJ', 'link': 'https://scholar.google.com/citations?user=206vNCEAAAAJ&hl=en&oi=sra'}, {'name': 'Y Wen', 'id': 'oUYfjg0AAAAJ', 'link': 'https://scholar.google.com/citations?user=oUYfjg0AAAAJ&hl=en&oi=sra'}, {'name': 'J Katz', 'id': 'yPw4WjoAAAAJ', | This notebook shows examples of how to use SearchApi to search the web. Go to https://www.searchapi.io/ to sign up for a free account and get API key. |
3,103 | 'id': 'yPw4WjoAAAAJ', 'link': 'https://scholar.google.com/citations?user=yPw4WjoAAAAJ&hl=en&oi=sra'}]}, {'position': 9, 'title': 'ChatGPT and other large language models are ' 'double-edged swords', 'data_cid': 'So0q8TRvxhYJ', 'link': 'https://pubs.rsna.org/doi/full/10.1148/radiol.230163', 'publication': 'Y Shen, L Heacock, J Elias, KD Hentel, B ' 'Reig, G Shih… - Radiology, 2023 - ' 'pubs.rsna.org', 'snippet': '… Large Language Models (LLMs) are deep ' 'learning models trained to understand and ' 'generate natural language. Recent studies ' 'demonstrated that LLMs achieve great success ' 'in a …', 'inline_links': {'cited_by': {'cites_id': '1641121387398204746', 'total': 231, 'link': 'https://scholar.google.com/scholar?cites=1641121387398204746&as_sdt=5,33&sciodt=0,33&hl=en'}, 'versions': {'cluster_id': '1641121387398204746', 'total': 3, 'link': 'https://scholar.google.com/scholar?cluster=1641121387398204746&hl=en&as_sdt=0,33'}, 'related_articles_link': 'https://scholar.google.com/scholar?q=related:So0q8TRvxhYJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33'}, 'authors': [{'name': 'Y Shen', | This notebook shows examples of how to use SearchApi to search the web. Go to https://www.searchapi.io/ to sign up for a free account and get API key. |
3,104 | 'Y Shen', 'id': 'XaeN2zgAAAAJ', 'link': 'https://scholar.google.com/citations?user=XaeN2zgAAAAJ&hl=en&oi=sra'}, {'name': 'L Heacock', 'id': 'tYYM5IkAAAAJ', 'link': 'https://scholar.google.com/citations?user=tYYM5IkAAAAJ&hl=en&oi=sra'}]}, {'position': 10, 'title': 'Pythia: A suite for analyzing large language ' 'models across training and scaling', 'data_cid': 'aaIDvsMAD8QJ', 'link': 'https://proceedings.mlr.press/v202/biderman23a.html', 'publication': 'S Biderman, H Schoelkopf… - ' 'International …, 2023 - ' 'proceedings.mlr.press', 'snippet': '… large language models, we prioritize ' 'consistency in model … out the most ' 'performance from each model. For example, we ' '… models, as it is becoming widely used for ' 'the largest models, …', 'inline_links': {'cited_by': {'cites_id': '14127511396791067241', 'total': 89, 'link': 'https://scholar.google.com/scholar?cites=14127511396791067241&as_sdt=5,33&sciodt=0,33&hl=en'}, 'versions': {'cluster_id': '14127511396791067241', 'total': 3, 'link': 'https://scholar.google.com/scholar?cluster=14127511396791067241&hl=en&as_sdt=0,33'}, | This notebook shows examples of how to use SearchApi to search the web. Go to https://www.searchapi.io/ to sign up for a free account and get API key. |
3,105 | 'related_articles_link': 'https://scholar.google.com/scholar?q=related:aaIDvsMAD8QJ:scholar.google.com/&scioq=Large+Language+Models&hl=en&as_sdt=0,33', 'cached_page_link': 'https://scholar.googleusercontent.com/scholar?q=cache:aaIDvsMAD8QJ:scholar.google.com/+Large+Language+Models&hl=en&as_sdt=0,33'}, 'resource': {'name': 'mlr.press', 'format': 'PDF', 'link': 'https://proceedings.mlr.press/v202/biderman23a/biderman23a.pdf'}, 'authors': [{'name': 'S Biderman', 'id': 'bO7H0DAAAAAJ', 'link': 'https://scholar.google.com/citations?user=bO7H0DAAAAAJ&hl=en&oi=sra'}, {'name': 'H Schoelkopf', 'id': 'XLahYIYAAAAJ', 'link': 'https://scholar.google.com/citations?user=XLahYIYAAAAJ&hl=en&oi=sra'}]}], 'related_searches': [{'query': 'large language models machine', 'highlighted': ['machine'], 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=1&q=large+language+models+machine&qst=ib'}, {'query': 'large language models pruning', 'highlighted': ['pruning'], 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=2&q=large+language+models+pruning&qst=ib'}, {'query': 'large language models multitask learners', 'highlighted': ['multitask learners'], 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=3&q=large+language+models+multitask+learners&qst=ib'}, {'query': 'large language models speech recognition', | This notebook shows examples of how to use SearchApi to search the web. Go to https://www.searchapi.io/ to sign up for a free account and get API key. |
3,106 | models speech recognition', 'highlighted': ['speech recognition'], 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=4&q=large+language+models+speech+recognition&qst=ib'}, {'query': 'large language models machine translation', 'highlighted': ['machine translation'], 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=5&q=large+language+models+machine+translation&qst=ib'}, {'query': 'emergent abilities of large language models', 'highlighted': ['emergent abilities of'], 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=6&q=emergent+abilities+of+large+language+models&qst=ir'}, {'query': 'language models privacy risks', 'highlighted': ['privacy risks'], 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=7&q=language+models+privacy+risks&qst=ir'}, {'query': 'language model fine tuning', 'highlighted': ['fine tuning'], 'link': 'https://scholar.google.com/scholar?hl=en&as_sdt=0,33&qsp=8&q=language+model+fine+tuning&qst=ir'}], 'pagination': {'current': 1, 'next': 'https://scholar.google.com/scholar?start=10&q=Large+Language+Models&hl=en&as_sdt=0,33', 'other_pages': {'2': 'https://scholar.google.com/scholar?start=10&q=Large+Language+Models&hl=en&as_sdt=0,33', '3': 'https://scholar.google.com/scholar?start=20&q=Large+Language+Models&hl=en&as_sdt=0,33', '4': 'https://scholar.google.com/scholar?start=30&q=Large+Language+Models&hl=en&as_sdt=0,33', '5': | This notebook shows examples of how to use SearchApi to search the web. Go to https://www.searchapi.io/ to sign up for a free account and get API key. |
3,107 | '5': 'https://scholar.google.com/scholar?start=40&q=Large+Language+Models&hl=en&as_sdt=0,33', '6': 'https://scholar.google.com/scholar?start=50&q=Large+Language+Models&hl=en&as_sdt=0,33', '7': 'https://scholar.google.com/scholar?start=60&q=Large+Language+Models&hl=en&as_sdt=0,33', '8': 'https://scholar.google.com/scholar?start=70&q=Large+Language+Models&hl=en&as_sdt=0,33', '9': 'https://scholar.google.com/scholar?start=80&q=Large+Language+Models&hl=en&as_sdt=0,33', '10': 'https://scholar.google.com/scholar?start=90&q=Large+Language+Models&hl=en&as_sdt=0,33'}}} | This notebook shows examples of how to use SearchApi to search the web. Go to https://www.searchapi.io/ to sign up for a free account and get API key. |
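The fragments in rows 3,100–3,107 are slices of a single Google Scholar payload returned through LangChain's SearchApi integration. The code that produced it is not included in these rows, so the following is only a minimal sketch, assuming the SearchApiAPIWrapper utility and its engine/results usage behave as described on that page; the API key value is a placeholder.

```python
import os
import pprint

from langchain.utilities import SearchApiAPIWrapper

# Placeholder key; sign up at https://www.searchapi.io/ to get one.
os.environ["SEARCHAPI_API_KEY"] = ""

# Assumed usage: the google_scholar engine returns the full result dict
# (organic results, related searches, pagination) of the kind shown above.
search = SearchApiAPIWrapper(engine="google_scholar")
results = search.results("Large Language Models")
pprint.pp(results)
```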
3,108 | OpenWeatherMap | 🦜️🔗 Langchain | This notebook goes over how to use the OpenWeatherMap component to fetch weather information. |
3,109 | OpenWeatherMap: This notebook goes over how to use the OpenWeatherMap component to fetch weather information.First, you need to sign up for an OpenWeatherMap API key:Go to OpenWeatherMap and sign up for an API key herepip install pyowmThen we will need to set some environment variables:Save your API KEY into OPENWEATHERMAP_API_KEY env variableUse the wrapper from langchain.utilities import OpenWeatherMapAPIWrapperimport osos.environ["OPENWEATHERMAP_API_KEY"] = ""weather = OpenWeatherMapAPIWrapper()weather_data = weather.run("London,GB")print(weather_data) In London,GB, the current weather is as follows: Detailed status: broken clouds Wind speed: 2.57 m/s, direction: 240° Humidity: 55% Temperature: - Current: 20.12°C - High: 21.75°C - Low: 18.68°C - Feels like: 19.62°C Rain: {} Heat index: None Cloud cover: 75%Use the tool from langchain.llms import OpenAIfrom langchain.agents import load_tools, initialize_agent, AgentTypeimport osos.environ["OPENAI_API_KEY"] = ""os.environ["OPENWEATHERMAP_API_KEY"] = ""llm = | This notebook goes over how to use the OpenWeatherMap component to fetch weather information. |
3,110 | ""os.environ["OPENWEATHERMAP_API_KEY"] = ""llm = OpenAI(temperature=0)tools = load_tools(["openweathermap-api"], llm)agent_chain = initialize_agent( tools=tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)agent_chain.run("What's the weather like in London?") > Entering new AgentExecutor chain... I need to find out the current weather in London. Action: OpenWeatherMap Action Input: London,GB Observation: In London,GB, the current weather is as follows: Detailed status: broken clouds Wind speed: 2.57 m/s, direction: 240° Humidity: 56% Temperature: - Current: 20.11°C - High: 21.75°C - Low: 18.68°C - Feels like: 19.64°C Rain: {} Heat index: None Cloud cover: 75% Thought: I now know the current weather in London. Final Answer: The current weather in London is broken clouds, with a wind speed of 2.57 m/s, direction 240°, humidity of 56%, temperature of 20.11°C, high of 21.75°C, low of 18.68°C, and a heat index of None. > Finished chain. 'The current weather in London is broken clouds, with a wind speed of 2.57 m/s, direction 240°, humidity of 56%, temperature of 20.11°C, high of 21.75°C, low of 18.68°C, and a heat index of None.' | This notebook goes over how to use the OpenWeatherMap component to fetch weather information. |
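The OpenWeatherMap notebook code in rows 3,109–3,110 is flattened into single lines by the page extraction; reassembled (with placeholder API keys), it is roughly:

```python
import os

from langchain.utilities import OpenWeatherMapAPIWrapper
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

os.environ["OPENWEATHERMAP_API_KEY"] = ""  # placeholder
os.environ["OPENAI_API_KEY"] = ""          # placeholder

# Direct wrapper usage: returns a formatted weather report string
weather = OpenWeatherMapAPIWrapper()
weather_data = weather.run("London,GB")
print(weather_data)

# Tool usage inside a zero-shot ReAct agent
llm = OpenAI(temperature=0)
tools = load_tools(["openweathermap-api"], llm)
agent_chain = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent_chain.run("What's the weather like in London?")
```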
3,111 | Lemon Agent | 🦜️🔗 Langchain | Lemon Agent helps you build powerful AI assistants in minutes and automate workflows by allowing for accurate and reliable read and write operations in tools like Airtable, Hubspot, Discord, Notion, Slack and Github. |
3,112 | Lemon Agent: Lemon Agent helps you build powerful AI assistants in minutes and automate workflows by allowing for accurate and reliable read and write operations in tools like Airtable, Hubspot, Discord, Notion, Slack and Github.See full docs here.Most connectors available today are focused on read-only operations, limiting the potential of LLMs. Agents, on the other hand, have a tendency to hallucinate from time to time due to missing context or instructions.With Lemon AI, it is possible to give your agents access to well-defined APIs for reliable read and write operations. In addition, Lemon AI functions allow you to further reduce the risk of hallucinations by providing a way to statically define workflows that the model can rely on in case of uncertainty.Quick Start The following quick start demonstrates how to use Lemon AI in combination with Agents to automate workflows that involve interaction with internal tooling.1. Install Lemon AI Requires Python 3.8.1 and above.To use Lemon AI in your Python project run pip install lemonaiThis will install the corresponding Lemon AI client | Lemon Agent helps you build powerful AI assistants in minutes and automate workflows by allowing for accurate and reliable read and write operations in tools like Airtable, Hubspot, Discord, Notion, Slack and Github. |
3,113 | will install the corresponding Lemon AI client which you can then import into your script.The tool uses Python packages langchain and loguru. In case of any installation errors with Lemon AI, install both packages first and then install the Lemon AI package.2. Launch the Server The interaction of your agents and all tools provided by Lemon AI is handled by the Lemon AI Server. To use Lemon AI you need to run the server on your local machine so the Lemon AI Python client can connect to it.3. Use Lemon AI with Langchain Lemon AI automatically solves given tasks by finding the right combination of relevant tools or uses Lemon AI Functions as an alternative. The following example demonstrates how to retrieve a user from Hackernews and write it to a table in Airtable:(Optional) Define your Lemon AI Functions Similar to OpenAI functions, Lemon AI provides the option to define workflows as reusable functions. These functions can be defined for use cases where it is especially important to move as close as possible to near-deterministic behavior. Specific workflows can be defined in a separate lemonai.json:[ { "name": "Hackernews Airtable User Workflow", "description": "retrieves user data from Hackernews and appends it to a table in Airtable", "tools": ["hackernews-get-user", "airtable-append-data"] }]Your model will have access to these functions and will prefer them over self-selecting tools to solve a given task. All you have to do is to let the agent know that it should use a given function by including the function name in the prompt.Include Lemon AI in your Langchain project import osfrom lemonai import execute_workflowfrom langchain.llms import OpenAILoad API Keys and Access Tokens To use tools that require authentication, you have to store the corresponding access credentials in your environment in the format "{tool name}_{authentication string}" where the authentication string is one of ["API_KEY", "SECRET_KEY", "SUBSCRIPTION_KEY", | Lemon Agent helps you build powerful AI assistants in minutes and automate workflows by allowing for accurate and reliable read and write operations in tools like Airtable, Hubspot, Discord, Notion, Slack and Github. |
3,114 | of ["API_KEY", "SECRET_KEY", "SUBSCRIPTION_KEY", "ACCESS_KEY"] for API keys or ["ACCESS_TOKEN", "SECRET_TOKEN"] for authentication tokens. Examples are "OPENAI_API_KEY", "BING_SUBSCRIPTION_KEY", "AIRTABLE_ACCESS_TOKEN".""" Load all relevant API Keys and Access Tokens into your environment variables """os.environ["OPENAI_API_KEY"] = "*INSERT OPENAI API KEY HERE*"os.environ["AIRTABLE_ACCESS_TOKEN"] = "*INSERT AIRTABLE TOKEN HERE*"hackernews_username = "*INSERT HACKERNEWS USERNAME HERE*"airtable_base_id = "*INSERT BASE ID HERE*"airtable_table_id = "*INSERT TABLE ID HERE*"""" Define your instruction to be given to your LLM """prompt = f"""Read information from Hackernews for user {hackernews_username} and then write the results toAirtable (baseId: {airtable_base_id}, tableId: {airtable_table_id}). Only write the fields "username", "karma"and "created_at_i". Please make sure that Airtable does NOT automatically convert the field types.""""""Use the Lemon AI execute_workflow wrapper to run your Langchain agent in combination with Lemon AI """model = OpenAI(temperature=0)execute_workflow(llm=model, prompt_string=prompt)4. Gain transparency on your Agent's decision making To gain transparency on how your Agent interacts with Lemon AI tools to solve a given task, all decisions made, tools used and operations performed are written to a local lemonai.log file. Every time your LLM agent is interacting with the Lemon AI tool stack a corresponding log entry is created.2023-06-26T11:50:27.708785+0100 - b5f91c59-8487-45c2-800a-156eac0c7dae - hackernews-get-user2023-06-26T11:50:39.624035+0100 - b5f91c59-8487-45c2-800a-156eac0c7dae - airtable-append-data2023-06-26T11:58:32.925228+0100 - 5efe603c-9898-4143-b99a-55b50007ed9d - hackernews-get-user2023-06-26T11:58:43.988788+0100 - 5efe603c-9898-4143-b99a-55b50007ed9d - airtable-append-dataBy using the Lemon AI Analytics you can easily gain a better understanding of how frequently and in which order tools are used. As a result, you | Lemon Agent helps you build powerful AI assistants in minutes and automate workflows by allowing for accurate and reliable read and write operations in tools like Airtable, Hubspot, Discord, Notion, Slack and Github. |
3,115 | in which order tools are used. As a result, you can identify weak spots in your agent’s decision-making capabilities and move to a more deterministic behavior by defining Lemon AI functions. | Lemon Agent helps you build powerful AI assistants in minutes and automate workflows by allowing for accurate and reliable read and write operations in tools like Airtable, Hubspot, Discord, Notion, Slack and Github. |
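Rows 3,112–3,115 flatten the Lemon AI quick start; the LangChain integration it describes, reassembled with the original placeholder values, looks roughly like this. The optional lemonai.json workflow quoted in row 3,113 can then be referenced by its name in the prompt.

```python
import os

from lemonai import execute_workflow
from langchain.llms import OpenAI

# Credentials follow the "{tool name}_{authentication string}" convention described above.
os.environ["OPENAI_API_KEY"] = "*INSERT OPENAI API KEY HERE*"
os.environ["AIRTABLE_ACCESS_TOKEN"] = "*INSERT AIRTABLE TOKEN HERE*"

hackernews_username = "*INSERT HACKERNEWS USERNAME HERE*"
airtable_base_id = "*INSERT BASE ID HERE*"
airtable_table_id = "*INSERT TABLE ID HERE*"

# Instruction given to the LLM
prompt = f"""Read information from Hackernews for user {hackernews_username} and then write the results to
Airtable (baseId: {airtable_base_id}, tableId: {airtable_table_id}). Only write the fields "username", "karma"
and "created_at_i". Please make sure that Airtable does NOT automatically convert the field types."""

# Run the LangChain agent through the Lemon AI execute_workflow wrapper
model = OpenAI(temperature=0)
execute_workflow(llm=model, prompt_string=prompt)
```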
3,116 | DuckDuckGo Search | 🦜️🔗 Langchain | This notebook goes over how to use the duck-duck-go search component. |
3,117 | DuckDuckGo Search: This notebook goes over how to use the duck-duck-go search component.# !pip install duckduckgo-searchfrom langchain.tools import DuckDuckGoSearchRunsearch = DuckDuckGoSearchRun()search.run("Obama's first name?") 'August 4, 1961 (age 61) Honolulu Hawaii Title / Office: presidency of the United States of America (2009-2017), United States United States Senate (2005-2008), United States ... (Show more) Political Affiliation: Democratic Party Awards And Honors: Barack Hussein Obama II (/ bəˈrɑːk huːˈseɪn oʊˈbɑːmə / bə-RAHK hoo-SAYN oh-BAH-mə; born August 4, 1961) is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, he was the first African-American president of the United States. Obama previously served as a U.S. senator representing Illinois ... Answer (1 of 12): I see others have answered President Obama\'s name which is "Barack Hussein Obama". President Obama has received many comments about his name from the racists across US. It is worth noting that he never changed | This notebook goes over how to use the duck-duck-go search component. |
3,118 | US. It is worth noting that he never changed his name. Also, it is worth noting that a simple search would have re... What is Barack Obama\'s full name? Updated: 11/11/2022 Wiki User ∙ 6y ago Study now See answer (1) Best Answer Copy His full, birth name is Barack Hussein Obama, II. He was named after his... Alex Oliveira July 24, 2023 4:57pm Updated 0 seconds of 43 secondsVolume 0% 00:00 00:43 The man who drowned while paddleboarding on a pond outside the Obamas\' Martha\'s Vineyard estate has been...'To get more additional information (e.g. link, source) use DuckDuckGoSearchResults()from langchain.tools import DuckDuckGoSearchResultssearch = DuckDuckGoSearchResults()search.run("Obama") "[snippet: Barack Hussein Obama II (/ bəˈrɑːk huːˈseɪn oʊˈbɑːmə / bə-RAHK hoo-SAYN oh-BAH-mə; born August 4, 1961) is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, he was the first African-American president of the United States. Obama previously served as a U.S. senator representing Illinois ..., title: Barack Obama - Wikipedia, link: https://en.wikipedia.org/wiki/Barack_Obama], [snippet: Barack Obama, in full Barack Hussein Obama II, (born August 4, 1961, Honolulu, Hawaii, U.S.), 44th president of the United States (2009-17) and the first African American to hold the office. Before winning the presidency, Obama represented Illinois in the U.S. Senate (2005-08). He was the third African American to be elected to that body ..., title: Barack Obama | Biography, Parents, Education, Presidency, Books ..., link: https://www.britannica.com/biography/Barack-Obama], [snippet: Barack Obama 's tenure as the 44th president of the United States began with his first inauguration on January 20, 2009, and ended on January 20, 2017. A Democrat from Illinois, Obama took office following a decisive victory over Republican nominee John McCain in the 2008 presidential election. Four | This notebook goes over how to use the duck-duck-go search component. |
3,119 | McCain in the 2008 presidential election. Four years later, in the 2012 presidential ..., title: Presidency of Barack Obama - Wikipedia, link: https://en.wikipedia.org/wiki/Presidency_of_Barack_Obama], [snippet: First published on Mon 24 Jul 2023 20.03 EDT. Barack Obama's personal chef died while paddleboarding near the ex-president's home on Martha's Vineyard over the weekend, Massachusetts state ..., title: Obama's personal chef dies while paddleboarding off Martha's Vineyard ..., link: https://www.theguardian.com/us-news/2023/jul/24/tafari-campbell-barack-obama-chef-drowns-marthas-vineyard]"You can also just search for news articles. Use the keyword backend="news"search = DuckDuckGoSearchResults(backend="news")search.run("Obama") "[date: 2023-07-26T12:01:22, title: 'My heart is broken': Former Obama White House chef mourned following apparent drowning death in Edgartown, snippet: Tafari Campbell of Dumfries, Va., had been paddle boarding in Edgartown Great Pond when he appeared to briefly struggle, submerged, and did not return to the surface, authorities have said. Crews ultimately found the 45-year-old's body Monday morning., source: The Boston Globe on MSN.com, link: https://www.msn.com/en-us/news/us/my-heart-is-broken-former-obama-white-house-chef-mourned-following-apparent-drowning-death-in-edgartown/ar-AA1elNB8], [date: 2023-07-25T18:44:00, title: Obama's chef drowns paddleboarding near former president's Edgartown vacation home, snippet: Campbell was visiting Martha's Vineyard, where the Obamas own a vacation home. He was not wearing a lifejacket when he fell off his paddleboard., source: YAHOO!News, link: https://news.yahoo.com/obama-chef-drowns-paddleboarding-near-184437491.html], [date: 2023-07-26T00:30:00, title: Obama's personal chef dies while paddleboarding off Martha's Vineyard, snippet: Tafari Campbell, who worked at the White House during Obama's presidency, was visiting the island while the family was away, source: The Guardian, link: | This notebook goes over how to use the duck-duck-go search component. |
3,120 | the family was away, source: The Guardian, link: https://www.theguardian.com/us-news/2023/jul/24/tafari-campbell-barack-obama-chef-drowns-marthas-vineyard], [date: 2023-07-24T21:54:00, title: Obama's chef ID'd as paddleboarder who drowned near former president's Martha's Vineyard estate, snippet: Former President Barack Obama's personal chef, Tafari Campbell, has been identified as the paddle boarder who drowned near the Obamas' Martha's Vineyard estate., source: Fox News, link: https://www.foxnews.com/politics/obamas-chef-idd-paddleboarder-who-drowned-near-former-presidents-marthas-vineyard-estate]"You can also directly pass a custom DuckDuckGoSearchAPIWrapper to DuckDuckGoSearchResults. Therefore, you have much more control over the search results.from langchain.utilities import DuckDuckGoSearchAPIWrapperwrapper = DuckDuckGoSearchAPIWrapper(region="de-de", time="d", max_results=2)search = DuckDuckGoSearchResults(api_wrapper=wrapper, backend="news")search.run("Obama") '[date: 2023-07-25T12:15:00, title: Barack + Michelle Obama: Sie trauern um Angestellten, snippet: Barack und Michelle Obama trauern um ihren ehemaligen Küchenchef Tafari Campbell. Der Familienvater verunglückte am vergangenen Sonntag und wurde in einem Teich geborgen., source: Gala, link: https://www.gala.de/stars/news/barack---michelle-obama--sie-trauern-um-angestellten-23871228.html], [date: 2023-07-25T10:30:00, title: Barack Obama: Sein Koch (†45) ist tot - diese Details sind bekannt, snippet: Tafari Campbell war früher im Weißen Haus eingestellt, arbeitete anschließend weiter für Ex-Präsident Barack Obama. Nun ist er gestorben. Diese Details sind bekannt., source: T-Online, link: https://www.t-online.de/unterhaltung/stars/id_100213226/barack-obama-sein-koch-45-ist-tot-diese-details-sind-bekannt.html], [date: 2023-07-25T05:33:23, title: Barack Obama: Sein Privatkoch ist bei einem tragischen Unfall gestorben, snippet: Barack Obama (61) und Michelle Obama (59) sind in tiefer Trauer. Ihr | This notebook goes over how to use the duck-duck-go search component. |
3,121 | Michelle Obama (59) sind in tiefer Trauer. Ihr Privatkoch Tafari Campbell ist am Montag (24. Juli) ums Leben gekommen, er wurde nur 45 Jahre alt. Laut US-Polizei starb er bei ein, source: BUNTE.de, link: https://www.msn.com/de-de/unterhaltung/other/barack-obama-sein-privatkoch-ist-bei-einem-tragischen-unfall-gestorben/ar-AA1ejrAd], [date: 2023-07-25T02:25:00, title: Barack Obama: Privatkoch tot in See gefunden, snippet: Tafari Campbell kochte für Barack Obama im Weißen Haus - und auch privat nach dessen Abschied aus dem Präsidentenamt. Nun machte die Polizei in einem Gewässer eine traurige Entdeckung., source: SPIEGEL, link: https://www.spiegel.de/panorama/justiz/barack-obama-leibkoch-tot-in-see-gefunden-a-3cdf6377-bee0-43f1-a200-a285742f9ffc]' | This notebook goes over how to use the duck-duck-go search component. | This notebook goes over how to use the duck-duck-go search component. ->: Michelle Obama (59) sind in tiefer Trauer. Ihr
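DuckDuckGoSearchResults returns all hits as a single bracketed string rather than structured objects. If you need the individual fields, a small parser is enough; the helper below is an illustrative sketch (not part of LangChain) that assumes the date/title/snippet/source/link layout shown in the output above, with search being the DuckDuckGoSearchResults instance created earlier.

import re

def parse_ddg_results(raw: str) -> list:
    """Split the bracketed string returned by DuckDuckGoSearchResults into dicts."""
    records = []
    for chunk in re.findall(r"\[(.*?)\]", raw, flags=re.DOTALL):
        # Split on the known field names so commas inside titles/snippets survive.
        parts = re.split(r"(?:^|,\s)(date|title|snippet|source|link):\s", chunk)
        records.append(dict(zip(parts[1::2], (p.strip() for p in parts[2::2]))))
    return records

for item in parse_ddg_results(search.run("Obama")):
    print(item.get("date"), "-", item.get("title"), "->", item.get("link"))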
3,122 | AWS Lambda | 🦜️🔗 LangChain | Amazon AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It helps developers to build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications. | Amazon AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It helps developers to build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications. ->: AWS Lambda | 🦜️🔗 LangChain |
3,123 | AWS LambdaAmazon AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It helps developers to build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications.This notebook goes over how to use the AWS Lambda Tool.By including an awslambda in the list of tools provided to an Agent, you can grant your Agent the ability to invoke code running in your AWS Cloud for whatever purposes you need.When an Agent uses the AWS Lambda tool, it will provide an argument of type string which will in turn be passed into the Lambda function via the event parameter.First, you need to install the boto3 Python package.pip install boto3 > /dev/nullIn order for an agent to use the tool, you must provide it with a name and description that match the functionality of your Lambda function's logic. You must also provide the name of your function. Note that because this tool is | Amazon AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It helps developers to build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications. | Amazon AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It helps developers to build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications. 
->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAlpha VantageApifyArXivAWS LambdaShell (bash)Bearly Code InterpreterBing SearchBrave SearchChatGPT PluginsDall-E Image GeneratorDataForSeoDuckDuckGo SearchEden AIEleven Labs Text2SpeechFile SystemGolden QueryGoogle DriveGoogle PlacesGoogle SearchGoogle SerperGradioGraphQLHuggingFace Hub ToolsHuman as a toolIFTTT WebHooksLemon AgentMetaphor SearchNuclia UnderstandingOpenWeatherMapPubMedRequestsSceneXplainSearch ToolsSearchApiSearxNG SearchSerpAPITwilioWikipediaWolfram AlphaYahoo Finance NewsYouTubeZapier Natural Language ActionsAgents and toolkitsMemoryCallbacksChat loadersComponentsToolsAWS LambdaAWS LambdaAmazon AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It helps developers to build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications.This notebook goes over how to use the AWS Lambda Tool.By including a awslambda in the list of tools provided to an Agent, you can grant your Agent the ability to invoke code running in your AWS Cloud for whatever purposes you need.When an Agent uses the AWS Lambda tool, it will provide an argument of type string which will in turn be passed into the Lambda function via the event parameter.First, you need to install boto3 python package.pip install boto3 > /dev/nullIn order for an agent to use the tool, you must provide it with the name and description that match the functionality of you lambda function's logic. You must also provide the name of your function. Note that because this tool is |
3,124 | of your function. Note that because this tool is effectively just a wrapper around the boto3 library, you will need to run aws configure in order to make use of the tool. For more detail, see here.from langchain.llms import OpenAIfrom langchain.agents import load_tools, initialize_agent, AgentTypellm = OpenAI(temperature=0)tools = load_tools( ["awslambda"], awslambda_tool_name="email-sender", awslambda_tool_description="sends an email with the specified content to [email protected]", function_name="testFunction1",)agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)agent.run("Send an email to [email protected] saying hello world.") | Amazon AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It helps developers to build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications. | Amazon AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It helps developers to build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications. ->: of your function. Note that because this tool is
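The notebook assumes a Lambda function named testFunction1 already exists in your account; it never shows the function itself. As a rough, hypothetical sketch (not from the notebook), a handler backing the email-sender tool could look like the following, where the agent's string argument arrives as the event payload and Amazon SES does the sending (the sender address is a placeholder you would have to verify in SES).

import boto3

ses = boto3.client("ses")

def lambda_handler(event, context):
    # The AWS Lambda tool passes the agent's argument through as the event.
    body = event if isinstance(event, str) else str(event)
    ses.send_email(
        Source="[email protected]",  # placeholder sender, must be SES-verified
        Destination={"ToAddresses": ["[email protected]"]},
        Message={
            "Subject": {"Data": "Message from LangChain agent"},
            "Body": {"Text": {"Data": body}},
        },
    )
    return {"statusCode": 200}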
3,125 | Google Search | 🦜️🔗 LangChain | This notebook goes over how to use the google search component. | This notebook goes over how to use the google search component. ->: Google Search | 🦜️🔗 LangChain |
3,126 | Google SearchThis notebook goes over how to use the google search component.First, you need to set up the proper API keys and environment variables. To set it up, create the GOOGLE_API_KEY in the Google Cloud credential console (https://console.cloud.google.com/apis/credentials) and a GOOGLE_CSE_ID using the Programmable Search Engine (https://programmablesearchengine.google.com/controlpanel/create). 
Next, it is good to follow the instructions found here.Then we will need to set some environment variables.import osos.environ["GOOGLE_CSE_ID"] = ""os.environ["GOOGLE_API_KEY"] = ""from langchain.tools import Toolfrom langchain.utilities import GoogleSearchAPIWrappersearch = GoogleSearchAPIWrapper()tool = Tool( name="Google Search", description="Search Google for recent results.", func=search.run,)tool.run("Obama's first name?") "STATE OF HAWAII. 1 Child's First Name. (Type or print). 2. Sex. BARACK. 3. This Birth. CERTIFICATE OF LIVE BIRTH. FILE. NUMBER 151 le. lb. Middle Name. Barack Hussein Obama II is an American former politician who served as the 44th president of the |
3,127 | who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic\xa0... When Barack Obama was elected president in 2008, he became the first African American to hold ... The Middle East remained a key foreign policy challenge. Jan 19, 2017 ... Jordan Barack Treasure, New York City, born in 2008 ... Jordan Barack Treasure made national news when he was the focus of a New York newspaper\xa0... Portrait of George Washington, the 1st President of the United States ... Portrait of Barack Obama, the 44th President of the United States\xa0... His full name is Barack Hussein Obama II. Since the “II” is simply because he was named for his father, his last name is Obama. Mar 22, 2008 ... Barry Obama decided that he didn't like his nickname. A few of his friends at Occidental College had already begun to call him Barack (his\xa0... Aug 18, 2017 ... It took him several seconds and multiple clues to remember former President Barack Obama's first name. Miller knew that every answer had to\xa0... Feb 9, 2015 ... Michael Jordan misspelled Barack Obama's first name on 50th-birthday gift ... Knowing Obama is a Chicagoan and huge basketball fan,\xa0... 4 days ago ... Barack Obama, in full Barack Hussein Obama II, (born August 4, 1961, Honolulu, Hawaii, U.S.), 44th president of the United States (2009–17) and\xa0..."Number of Results​You can use the k parameter to set the number of resultssearch = GoogleSearchAPIWrapper(k=1)tool = Tool( name="I'm Feeling Lucky", description="Search Google and return the first result.", func=search.run,)tool.run("python") 'The official home of the Python Programming Language.''The official home of the Python Programming Language.'Metadata Results​Run query through GoogleSearch and return snippet, title, and link metadata.Snippet: The description of the result.Title: The title of the result.Link: The link to the result.search = GoogleSearchAPIWrapper()def top5_results(query): return | This notebook goes over how to use the google search component. | This notebook goes over how to use the google search component. ->: who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic\xa0... When Barack Obama was elected president in 2008, he became the first African American to hold ... The Middle East remained a key foreign policy challenge. Jan 19, 2017 ... Jordan Barack Treasure, New York City, born in 2008 ... Jordan Barack Treasure made national news when he was the focus of a New York newspaper\xa0... Portrait of George Washington, the 1st President of the United States ... Portrait of Barack Obama, the 44th President of the United States\xa0... His full name is Barack Hussein Obama II. Since the “II” is simply because he was named for his father, his last name is Obama. Mar 22, 2008 ... Barry Obama decided that he didn't like his nickname. A few of his friends at Occidental College had already begun to call him Barack (his\xa0... Aug 18, 2017 ... It took him several seconds and multiple clues to remember former President Barack Obama's first name. Miller knew that every answer had to\xa0... Feb 9, 2015 ... Michael Jordan misspelled Barack Obama's first name on 50th-birthday gift ... Knowing Obama is a Chicagoan and huge basketball fan,\xa0... 4 days ago ... 
Barack Obama, in full Barack Hussein Obama II, (born August 4, 1961, Honolulu, Hawaii, U.S.), 44th president of the United States (2009–17) and\xa0..."Number of Results​You can use the k parameter to set the number of resultssearch = GoogleSearchAPIWrapper(k=1)tool = Tool( name="I'm Feeling Lucky", description="Search Google and return the first result.", func=search.run,)tool.run("python") 'The official home of the Python Programming Language.''The official home of the Python Programming Language.'Metadata Results​Run query through GoogleSearch and return snippet, title, and link metadata.Snippet: The description of the result.Title: The title of the result.Link: The link to the result.search = GoogleSearchAPIWrapper()def top5_results(query): return |
3,128 | top5_results(query): return search.results(query, 5)tool = Tool( name="Google Search Snippets", description="Search Google for recent results.", func=top5_results,) | This notebook goes over how to use the google search component. | This notebook goes over how to use the google search component. ->: top5_results(query): return
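Because top5_results returns search.results(query, 5), calling it yields a list of dicts carrying the snippet, title, and link keys described above rather than a single string. A short, illustrative usage sketch:

for result in top5_results("Large Language Models"):
    # Each entry is a dict with the metadata fields listed above.
    print(result["title"])
    print(result["link"])
    print(result["snippet"])
    print("---")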
3,129 | Metaphor Search | 🦜️🔗 LangChain | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: Metaphor Search | 🦜️🔗 LangChain |
3,130 | Metaphor SearchMetaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page.This notebook goes over how to use Metaphor search.First, you need to set up the proper API keys and environment variables. 
Get 1000 free searches/month here.Then enter your API key as an environment variable.import osos.environ["METAPHOR_API_KEY"] = "..."Using their SDK‚ÄãThis is the newer and more supported way to use the Metaphor API - via their SDK# !pip install metaphor-pythonfrom metaphor_python import Metaphorclient = Metaphor(api_key=os.environ["METAPHOR_API_KEY"])from langchain.agents import toolfrom typing import List@tooldef search(query: str): """Call search engine with a query.""" return client.search(query, use_autoprompt=True, num_results=5)@tooldef get_contents(ids: List[str]): """Get contents of a webpage. The ids passed in should be a list of ids as fetched from `search`. """ return client.get_contents(ids)@tooldef find_similar(url: str): """Get search results similar to a given URL. The url passed in |
3,131 | similar to a given URL. The url passed in should be a URL returned from `search` """ return client.find_similar(url, num_results=5)tools = [search, get_contents, find_similar]Use in an agent‚Äãfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0)from langchain.agents import OpenAIFunctionsAgentfrom langchain.schema import SystemMessagesystem_message = SystemMessage(content="You are a web researcher who uses search engines to look up information.")prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)from langchain.agents import AgentExecutoragent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)agent_executor.run("Find the hottest AI agent startups and what they do") > Entering new AgentExecutor chain... Invoking: `search` with `{'query': 'hottest AI agent startups'}` SearchResponse(results=[Result(title='A Search Engine for Machine Intelligence', url='https://bellow.ai/', id='bdYc6hvHww_JvLv9k8NhPA', score=0.19460266828536987, published_date='2023-01-01', author=None, extract=None), Result(title='Adept: Useful General Intelligence', url='https://www.adept.ai/', id='aNBppxBZvQRZMov6sFVj9g', score=0.19103890657424927, published_date='2000-01-01', author=None, extract=None), Result(title='HiOperator | Generative AI-Enhanced Customer Service', url='https://www.hioperator.com/', id='jieb6sB53mId3EDo0z-SDw', score=0.18549954891204834, published_date='2000-01-01', author=None, extract=None), Result(title='Home - Stylo', url='https://www.askstylo.com/', id='kUiCuCjJYMD4N0NXdCtqlQ', score=0.1837376356124878, published_date='2000-01-01', author=None, extract=None), Result(title='DirectAI', url='https://directai.io/?utm_source=twitter&utm_medium=raw_message&utm_campaign=first_launch', id='45iSS8KnJ9tL1ilPg3dL9A', score=0.1835256814956665, published_date='2023-01-01', author=None, extract=None), | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: similar to a given URL. The url passed in should be a URL returned from `search` """ return client.find_similar(url, num_results=5)tools = [search, get_contents, find_similar]Use in an agent‚Äãfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0)from langchain.agents import OpenAIFunctionsAgentfrom langchain.schema import SystemMessagesystem_message = SystemMessage(content="You are a web researcher who uses search engines to look up information.")prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)from langchain.agents import AgentExecutoragent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)agent_executor.run("Find the hottest AI agent startups and what they do") > Entering new AgentExecutor chain... 
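The same SDK calls also work outside of an agent. Below is a minimal illustrative sketch using the client defined above; the attribute names match the SearchResponse and GetContentsResponse objects printed in the trace that follows.

response = client.search("hottest AI agent startups", use_autoprompt=True, num_results=5)
ids = [result.id for result in response.results]
contents = client.get_contents(ids)
for doc in contents.contents:
    # Each DocumentContent carries the page url, title, and an HTML extract.
    print(doc.title, doc.url)
    print((doc.extract or "")[:200])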
Invoking: `search` with `{'query': 'hottest AI agent startups'}` SearchResponse(results=[Result(title='A Search Engine for Machine Intelligence', url='https://bellow.ai/', id='bdYc6hvHww_JvLv9k8NhPA', score=0.19460266828536987, published_date='2023-01-01', author=None, extract=None), Result(title='Adept: Useful General Intelligence', url='https://www.adept.ai/', id='aNBppxBZvQRZMov6sFVj9g', score=0.19103890657424927, published_date='2000-01-01', author=None, extract=None), Result(title='HiOperator | Generative AI-Enhanced Customer Service', url='https://www.hioperator.com/', id='jieb6sB53mId3EDo0z-SDw', score=0.18549954891204834, published_date='2000-01-01', author=None, extract=None), Result(title='Home - Stylo', url='https://www.askstylo.com/', id='kUiCuCjJYMD4N0NXdCtqlQ', score=0.1837376356124878, published_date='2000-01-01', author=None, extract=None), Result(title='DirectAI', url='https://directai.io/?utm_source=twitter&utm_medium=raw_message&utm_campaign=first_launch', id='45iSS8KnJ9tL1ilPg3dL9A', score=0.1835256814956665, published_date='2023-01-01', author=None, extract=None), |
3,132 | author=None, extract=None), Result(title='Sidekick AI | Customer Service Automated', url='https://www.sidekickai.co/', id='nCoPMUtqWQqhUvsdTjJT6A', score=0.18215584754943848, published_date='2020-01-01', author=None, extract=None), Result(title='Hebbia - Search, Reinvented', url='https://www.hebbia.ai/', id='Zy0YaekZdd4rurPQKkys7A', score=0.1799020767211914, published_date='2023-01-01', author=None, extract=None), Result(title='AI.XYZ', url='https://www.ai.xyz/', id='A5c1ePEvsaQeml2Kui_-vA', score=0.1797989457845688, published_date='2023-01-01', author=None, extract=None), Result(title='Halist AI', url='https://halist.ai/', id='-lKPLSb4N4dgMZlTgoDvJg', score=0.17975398898124695, published_date='2023-03-01', author=None, extract=None), Result(title='Clone your best expert', url='https://airin.ai/', id='_XIjx1YLPfI4cKePIEc_bQ', score=0.17957791686058044, published_date='2016-02-12', author=None, extract=None)], api=<metaphor_python.api.Metaphor object at 0x104192140>) Invoking: `get_contents` with `{'ids': ['bdYc6hvHww_JvLv9k8NhPA', 'aNBppxBZvQRZMov6sFVj9g', 'jieb6sB53mId3EDo0z-SDw', 'kUiCuCjJYMD4N0NXdCtqlQ', '45iSS8KnJ9tL1ilPg3dL9A', 'nCoPMUtqWQqhUvsdTjJT6A', 'Zy0YaekZdd4rurPQKkys7A', 'A5c1ePEvsaQeml2Kui_-vA', '-lKPLSb4N4dgMZlTgoDvJg', '_XIjx1YLPfI4cKePIEc_bQ']}` GetContentsResponse(contents=[DocumentContent(id='bdYc6hvHww_JvLv9k8NhPA', url='https://bellow.ai/', title='A Search Engine for Machine Intelligence', extract="<div><div><h2>More Opinions</h2><p>Get responses from multiple AIs</p><p>Don't rely on a single source of truth, explore the full space of machine intelligence and get highly tailored results.</p></div></div>"), DocumentContent(id='aNBppxBZvQRZMov6sFVj9g', url='https://www.adept.ai/', title='Adept: Useful General Intelligence', extract='<div><div><p>Useful <br />General <br />Intelligence</p></div>'), DocumentContent(id='jieb6sB53mId3EDo0z-SDw', url='https://www.hioperator.com/', title='HiOperator | Generative AI-Enhanced Customer | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. 
->: author=None, extract=None), Result(title='Sidekick AI | Customer Service Automated', url='https://www.sidekickai.co/', id='nCoPMUtqWQqhUvsdTjJT6A', score=0.18215584754943848, published_date='2020-01-01', author=None, extract=None), Result(title='Hebbia - Search, Reinvented', url='https://www.hebbia.ai/', id='Zy0YaekZdd4rurPQKkys7A', score=0.1799020767211914, published_date='2023-01-01', author=None, extract=None), Result(title='AI.XYZ', url='https://www.ai.xyz/', id='A5c1ePEvsaQeml2Kui_-vA', score=0.1797989457845688, published_date='2023-01-01', author=None, extract=None), Result(title='Halist AI', url='https://halist.ai/', id='-lKPLSb4N4dgMZlTgoDvJg', score=0.17975398898124695, published_date='2023-03-01', author=None, extract=None), Result(title='Clone your best expert', url='https://airin.ai/', id='_XIjx1YLPfI4cKePIEc_bQ', score=0.17957791686058044, published_date='2016-02-12', author=None, extract=None)], api=<metaphor_python.api.Metaphor object at 0x104192140>) Invoking: `get_contents` with `{'ids': ['bdYc6hvHww_JvLv9k8NhPA', 'aNBppxBZvQRZMov6sFVj9g', 'jieb6sB53mId3EDo0z-SDw', 'kUiCuCjJYMD4N0NXdCtqlQ', '45iSS8KnJ9tL1ilPg3dL9A', 'nCoPMUtqWQqhUvsdTjJT6A', 'Zy0YaekZdd4rurPQKkys7A', 'A5c1ePEvsaQeml2Kui_-vA', '-lKPLSb4N4dgMZlTgoDvJg', '_XIjx1YLPfI4cKePIEc_bQ']}` GetContentsResponse(contents=[DocumentContent(id='bdYc6hvHww_JvLv9k8NhPA', url='https://bellow.ai/', title='A Search Engine for Machine Intelligence', extract="<div><div><h2>More Opinions</h2><p>Get responses from multiple AIs</p><p>Don't rely on a single source of truth, explore the full space of machine intelligence and get highly tailored results.</p></div></div>"), DocumentContent(id='aNBppxBZvQRZMov6sFVj9g', url='https://www.adept.ai/', title='Adept: Useful General Intelligence', extract='<div><div><p>Useful <br />General <br />Intelligence</p></div>'), DocumentContent(id='jieb6sB53mId3EDo0z-SDw', url='https://www.hioperator.com/', title='HiOperator | Generative AI-Enhanced Customer |
3,133 | | Generative AI-Enhanced Customer Service', extract="<div><div><div><div><div><h2>Generative AI-Enhanced Customer Support Automation</h2><p>Flexible, Scalable Customer Support</p></div><div><p></p></div></div><p></p></div><div><div><p>Why HiOperator?</p><h2>Truly scalable customer service</h2><p>A digital-first customer service provider that changes all the rules of what's possible. Scalable. 100% US-Based. Effortless. HiOperator is the digital payoff.</p></div><p></p></div><div><div><p>Next-Gen Customer Service</p><h2>Scaling with HiOperator's Superagents</h2><p>HiOperator is only possible in the digital era. Our revolutionary software connects with your systems to empower our agents to learn quickly and deliver incredible accuracy. </p></div><div><div><p></p><div><h3>Train Us Once</h3><p>We handle all of the recruiting, hiring, and training moving forward. Never have to deal with another classroom retraining or head count headaches.</p></div></div><div><div><h3>Send Us Tickets</h3><p>We pull tickets automatically from your preferred CRM vendor into our custom system. You have full control over <strong>how</strong> and <strong>when</strong> we get tickets.</p></div><p></p></div><div><p></p><div><h3>Pay per resolution</h3><p>We charge for each conversation we solve. No onboarding fees. No hourly rates. Pay for what you use.</p></div></div></div></div><div><p>Customer Experience</p><h2>Insights &Â\xa0News</h2></div><div><div><h2>Let's transform your customer service.</h2><p>We can onboard in a matter of days and we offer highly flexible contracts. Whether you need a large team to handle your support or some overflow assistance, getting started is easy.</p></div><p>We can onboard in a matter of days and we offer highly flexible contracts. Whether you need a large team to handle your support or some overflow assistance, getting started is easy.</p></div></div>"), DocumentContent(id='kUiCuCjJYMD4N0NXdCtqlQ', url='https://www.askstylo.com/', title='Home - Stylo', | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: | Generative AI-Enhanced Customer Service', extract="<div><div><div><div><div><h2>Generative AI-Enhanced Customer Support Automation</h2><p>Flexible, Scalable Customer Support</p></div><div><p></p></div></div><p></p></div><div><div><p>Why HiOperator?</p><h2>Truly scalable customer service</h2><p>A digital-first customer service provider that changes all the rules of what's possible. Scalable. 100% US-Based. Effortless. HiOperator is the digital payoff.</p></div><p></p></div><div><div><p>Next-Gen Customer Service</p><h2>Scaling with HiOperator's Superagents</h2><p>HiOperator is only possible in the digital era. Our revolutionary software connects with your systems to empower our agents to learn quickly and deliver incredible accuracy. </p></div><div><div><p></p><div><h3>Train Us Once</h3><p>We handle all of the recruiting, hiring, and training moving forward. Never have to deal with another classroom retraining or head count headaches.</p></div></div><div><div><h3>Send Us Tickets</h3><p>We pull tickets automatically from your preferred CRM vendor into our custom system. You have full control over <strong>how</strong> and <strong>when</strong> we get tickets.</p></div><p></p></div><div><p></p><div><h3>Pay per resolution</h3><p>We charge for each conversation we solve. No onboarding fees. 
No hourly rates. Pay for what you use.</p></div></div></div></div><div><p>Customer Experience</p><h2>Insights &Â\xa0News</h2></div><div><div><h2>Let's transform your customer service.</h2><p>We can onboard in a matter of days and we offer highly flexible contracts. Whether you need a large team to handle your support or some overflow assistance, getting started is easy.</p></div><p>We can onboard in a matter of days and we offer highly flexible contracts. Whether you need a large team to handle your support or some overflow assistance, getting started is easy.</p></div></div>"), DocumentContent(id='kUiCuCjJYMD4N0NXdCtqlQ', url='https://www.askstylo.com/', title='Home - Stylo', |
3,134 | title='Home - Stylo', extract='<div><div><header><div><p></p><h2>Stop angry customers from breaking support</h2><p></p></div></header><div><p></p><h2><em> </em><strong><em>â\x80\x9cWe solve 99 tickets perfectly </em>ð\x9f\x98\x87<em> but the 1 we miss lands in the CEOâ\x80\x99s inbox </em>ð\x9f\x98«<em>â\x80\x9d<br /></em></strong></h2><p></p><div><p><strong>â\x80\x8d</strong>That 1 costly ticket breaks your process, metrics, and the will of your team. Angry customers make support teams less effective, which makes customers angrier in return.<strong><br />â\x80\x8d</strong><br />Stylo is AI that tells you where to most effectively spend your time to improve the customer experience. This leads to happier customers, employees, and reduces churn.</p><p>â\x80\x8d<strong>No setup, no learning curve, just plug it in and go.</strong></p></div></div><div><div><p></p><div><p>â\x80\x9cIâ\x80\x99m able to better manage the team because I can pinpoint gaps in the teamâ\x80\x99s knowledge or training, and find room for process improvements.â\x80\x9d</p><p></p></div></div></div></div>'), DocumentContent(id='45iSS8KnJ9tL1ilPg3dL9A', url='https://directai.io/?utm_source=twitter&utm_medium=raw_message&utm_campaign=first_launch', title='DirectAI', extract="<div><div><div><h2>Vision models without training data.<br /></h2><p>Build and deploy powerful computer vision models with plain language.<br />No code or training required.</p></div><div><h2>Fundamentally different.</h2><p>We use large language models and zero-shot learning to instantly build models that fit your description.</p><br /></div><div><div><p></p><h2>We're removing the last major barrier to creating custom models - <br />training data.</h2><p></p></div><div><table><colgroup></colgroup><thead><tr><th><p>Deploy and iterate in seconds with DirectAI</p></th></tr></thead><tbody><tr><td>• Don't spend time assembling training data.</td></tr><tr><td>• Don't pay a third party to label your | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: title='Home - Stylo', extract='<div><div><header><div><p></p><h2>Stop angry customers from breaking support</h2><p></p></div></header><div><p></p><h2><em> </em><strong><em>â\x80\x9cWe solve 99 tickets perfectly </em>ð\x9f\x98\x87<em> but the 1 we miss lands in the CEOâ\x80\x99s inbox </em>ð\x9f\x98«<em>â\x80\x9d<br /></em></strong></h2><p></p><div><p><strong>â\x80\x8d</strong>That 1 costly ticket breaks your process, metrics, and the will of your team. Angry customers make support teams less effective, which makes customers angrier in return.<strong><br />â\x80\x8d</strong><br />Stylo is AI that tells you where to most effectively spend your time to improve the customer experience. 
This leads to happier customers, employees, and reduces churn.</p><p>â\x80\x8d<strong>No setup, no learning curve, just plug it in and go.</strong></p></div></div><div><div><p></p><div><p>â\x80\x9cIâ\x80\x99m able to better manage the team because I can pinpoint gaps in the teamâ\x80\x99s knowledge or training, and find room for process improvements.â\x80\x9d</p><p></p></div></div></div></div>'), DocumentContent(id='45iSS8KnJ9tL1ilPg3dL9A', url='https://directai.io/?utm_source=twitter&utm_medium=raw_message&utm_campaign=first_launch', title='DirectAI', extract="<div><div><div><h2>Vision models without training data.<br /></h2><p>Build and deploy powerful computer vision models with plain language.<br />No code or training required.</p></div><div><h2>Fundamentally different.</h2><p>We use large language models and zero-shot learning to instantly build models that fit your description.</p><br /></div><div><div><p></p><h2>We're removing the last major barrier to creating custom models - <br />training data.</h2><p></p></div><div><table><colgroup></colgroup><thead><tr><th><p>Deploy and iterate in seconds with DirectAI</p></th></tr></thead><tbody><tr><td>• Don't spend time assembling training data.</td></tr><tr><td>• Don't pay a third party to label your |
3,135 | Don't pay a third party to label your data.</td></tr><tr><td>• Don't pay to train your model.</td></tr><tr><td>• Don't spend months finetuning your model's behavior.</td></tr></tbody></table></div></div><div><h2>Venture-backed.<p>Based in NYC.</p><p>We're changing how people use AI in the real world.</p><p>Come talk to us on .</p></h2></div></div></div>"), DocumentContent(id='nCoPMUtqWQqhUvsdTjJT6A', url='https://www.sidekickai.co/', title='Sidekick AI | Customer Service Automated', extract='<div><div><div><div><div><div><div><p>Hi, I am an AI named Jenny, working at Pizza Planet. How can I help you today?</p></div><div><p>How much are large pizzas with 1 topping?</p></div><div><p>For most toppings, a large with one topping would be $10.99.</p></div><div><p>Ok, can I order a large with pepperoni</p></div><div><p>Sure! Takeout or delivery?</p></div><div><p>Alright, order placed. See you at 5 pm!</p></div></div><div><p></p></div></div><p></p></div><div><p>Meet Sidekick</p><div><p>\n Sidekick is an AI agent built to hold natural and dynamic conversations with your customers and talk just like a human.</p><p>Built on the world\'s most advanced AI models, Sidekick pushes the state of the art in natural conversation and converses seamlessly with your customers.\n </p></div><p>Try it out ➜</p><p>Try it out ↓</p></div><div><p>An AI agent designed for <strong>service-led growth.</strong></p><div><div><p></p><p>Personal</p><p>Every customer is different, and has unique needs. Our agents are built to provide personalized service depending on the customer\'s needs.</p></div><div><p></p><p>Fast</p><p>Unlike humans, our Sidekicks respond near-instantly, any time of the day. Your customers won\'t wait for service ever again.</p></div><div><p></p><p>Effective</p><p>Customers love great service, and Sidekick delivers. Grow revenue by solving issues in minutes instead of hours, and providing personalized support to each customer.</p></div></div></div><div><p>Integrating with | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: Don't pay a third party to label your data.</td></tr><tr><td>• Don't pay to train your model.</td></tr><tr><td>• Don't spend months finetuning your model's behavior.</td></tr></tbody></table></div></div><div><h2>Venture-backed.<p>Based in NYC.</p><p>We're changing how people use AI in the real world.</p><p>Come talk to us on .</p></h2></div></div></div>"), DocumentContent(id='nCoPMUtqWQqhUvsdTjJT6A', url='https://www.sidekickai.co/', title='Sidekick AI | Customer Service Automated', extract='<div><div><div><div><div><div><div><p>Hi, I am an AI named Jenny, working at Pizza Planet. How can I help you today?</p></div><div><p>How much are large pizzas with 1 topping?</p></div><div><p>For most toppings, a large with one topping would be $10.99.</p></div><div><p>Ok, can I order a large with pepperoni</p></div><div><p>Sure! Takeout or delivery?</p></div><div><p>Alright, order placed. 
See you at 5 pm!</p></div></div><div><p></p></div></div><p></p></div><div><p>Meet Sidekick</p><div><p>\n Sidekick is an AI agent built to hold natural and dynamic conversations with your customers and talk just like a human.</p><p>Built on the world\'s most advanced AI models, Sidekick pushes the state of the art in natural conversation and converses seamlessly with your customers.\n </p></div><p>Try it out ➜</p><p>Try it out ↓</p></div><div><p>An AI agent designed for <strong>service-led growth.</strong></p><div><div><p></p><p>Personal</p><p>Every customer is different, and has unique needs. Our agents are built to provide personalized service depending on the customer\'s needs.</p></div><div><p></p><p>Fast</p><p>Unlike humans, our Sidekicks respond near-instantly, any time of the day. Your customers won\'t wait for service ever again.</p></div><div><p></p><p>Effective</p><p>Customers love great service, and Sidekick delivers. Grow revenue by solving issues in minutes instead of hours, and providing personalized support to each customer.</p></div></div></div><div><p>Integrating with |
3,136 | with <strong>your tools.</strong></p></div><div><p><strong>Wherever </strong>your customers are.</p><p>\n Sidekick takes an omnichannel approach to customer service, aggregating all customer interactions across all platforms in one area. Currently most social media platforms are supported, along with website embeddings and API integration.\n </p><div><div><div><p>On the web.</p><div><p>Sidekick makes adding a live chat to your website as simple as copy and pasting a single line of code.</p><p>Chat bubbles discretely sit in the bottom right corner and provide a smooth conversation experience, with AI and human agents alike.</p></div></div><p></p><p></p></div><div><div><p>On Facebook.</p><div><p>Sidekick integrates with your Facebook pages to make live customer service one click away.</p><p>Customers can reach your agent and get service without ever leaving Messenger.</p></div></div><p></p><p></p></div><div><div><p>On Instagram.</p><div><p>E-Commerce on Instagram is especially demanding for customer service.</p><p>Sidekick integrates easily with Instagram accounts to put a live agent one click away.</p></div></div><p></p><p></p></div><div><div><p>On Twitter.</p><div><p>Customers are spending more time on Twitter, which means businesses should provide customer service right on the platform.</p><p>Sidekick integrates easily with Twitter accounts to put a live agent one click away.</p></div></div><p></p><p></p></div><div><div><p>Anywhere you want.</p><div><p>Our API provides programmatic access to your Sidekick agent to integrate into your own app.</p><p>We\'ve built simple abstractions over the chat interface to make it easy to work with our API.</p></div></div><div><div><p>Endpoints</p><div><p>POST</p><p>https://www.api.sidekickai.co/converse</p></div></div><div><p>Sample Request</p><div><pre>{\n "access_token": "KjZUZBWAOKwgLWAlVFyL",\n "conversation_id": "23874",\n "body": "How much is a large 2 topping?"\n}</pre></div></div><div><p>Sample Response</p><div><pre>{\n | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: with <strong>your tools.</strong></p></div><div><p><strong>Wherever </strong>your customers are.</p><p>\n Sidekick takes an omnichannel approach to customer service, aggregating all customer interactions across all platforms in one area. 
Currently most social media platforms are supported, along with website embeddings and API integration.\n </p><div><div><div><p>On the web.</p><div><p>Sidekick makes adding a live chat to your website as simple as copy and pasting a single line of code.</p><p>Chat bubbles discretely sit in the bottom right corner and provide a smooth conversation experience, with AI and human agents alike.</p></div></div><p></p><p></p></div><div><div><p>On Facebook.</p><div><p>Sidekick integrates with your Facebook pages to make live customer service one click away.</p><p>Customers can reach your agent and get service without ever leaving Messenger.</p></div></div><p></p><p></p></div><div><div><p>On Instagram.</p><div><p>E-Commerce on Instagram is especially demanding for customer service.</p><p>Sidekick integrates easily with Instagram accounts to put a live agent one click away.</p></div></div><p></p><p></p></div><div><div><p>On Twitter.</p><div><p>Customers are spending more time on Twitter, which means businesses should provide customer service right on the platform.</p><p>Sidekick integrates easily with Twitter accounts to put a live agent one click away.</p></div></div><p></p><p></p></div><div><div><p>Anywhere you want.</p><div><p>Our API provides programmatic access to your Sidekick agent to integrate into your own app.</p><p>We\'ve built simple abstractions over the chat interface to make it easy to work with our API.</p></div></div><div><div><p>Endpoints</p><div><p>POST</p><p>https://www.api.sidekickai.co/converse</p></div></div><div><p>Sample Request</p><div><pre>{\n "access_token": "KjZUZBWAOKwgLWAlVFyL",\n "conversation_id": "23874",\n "body": "How much is a large 2 topping?"\n}</pre></div></div><div><p>Sample Response</p><div><pre>{\n |
3,137 | Response</p><div><pre>{\n "response": "A large'), DocumentContent(id='Zy0YaekZdd4rurPQKkys7A', url='https://www.hebbia.ai/', title='Hebbia - Search, Reinvented', extract="<div><div><h2>Direct to the point <br />with cutting-edge AI.</h2><p>Stop relying on archaic software, traditional Q&A emails, or waiting for deal partners. Get answers on your own time with accuracy that you can't replicate with humans. <br />â\x80\x8d<br /></p><p>HebbiaÂ\xa0retrieves <strong>every</strong> answer, even insights humans overlook. <br /></p></div>"), DocumentContent(id='A5c1ePEvsaQeml2Kui_-vA', url='https://www.ai.xyz/', title='AI.XYZ', extract='<div><div>\n \n \n<article>\n \n \n \n \n \n<div><div>\n<p><h2><strong>Go be human</strong></h2></p>\n</div><div><p>\n</p><h4>Let your AI deal with the rest</h4>\n<p></p></div><div><p>Design your own AI with AI.XYZ</p></div><div>\n \n \n \n <p></p>\n \n </div></div>\n \n \n \n \n<div><p>\n</p><h3><strong>The digital world was designed to make us more productive but now navigating it all has become its own job.</strong></h3>\n<p></p></div>\n \n \n \n \n<section>\n <div>\n \n \n \n \n \n \n \n \n <p></p>\n \n \n </div>\n <div><div><p>\n</p><h2><strong>Take life a little easier</strong></h2>\n<p></p></div><div>\n \n \n \n <p></p>\n \n </div><div><p>\n</p><h2><strong>Tackles info<br />overload</strong></h2>\n<p></p></div><div><p>\n</p><h4>“Like ChatGPT, but way more proactive and useful because it’s designed by me, for only me”</h4>\n<p></p></div><div>\n \n \n \n <p></p>\n \n </div><div><p>\n</p><h2><strong>Never sits<br />around</strong></h2>\n<p></p></div><div><p>\n</p><h4>“Even if I’m not interacting with it, my AI looks for ways to simplify my day, surprising me with useful ideas”</h4>\n<p></p></div><div>\n \n \n \n <p></p>\n \n </div><div><p>\n</p><h2><strong>Supports and<br />inspires</strong></h2>\n<p></p></div><div><p>\n</p><h4>“It takes things off my plate, but also cheers me on throughout the day — helping me | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: Response</p><div><pre>{\n "response": "A large'), DocumentContent(id='Zy0YaekZdd4rurPQKkys7A', url='https://www.hebbia.ai/', title='Hebbia - Search, Reinvented', extract="<div><div><h2>Direct to the point <br />with cutting-edge AI.</h2><p>Stop relying on archaic software, traditional Q&A emails, or waiting for deal partners. Get answers on your own time with accuracy that you can't replicate with humans. <br />â\x80\x8d<br /></p><p>HebbiaÂ\xa0retrieves <strong>every</strong> answer, even insights humans overlook. 
<br /></p></div>"), DocumentContent(id='A5c1ePEvsaQeml2Kui_-vA', url='https://www.ai.xyz/', title='AI.XYZ', extract='<div><div>\n \n \n<article>\n \n \n \n \n \n<div><div>\n<p><h2><strong>Go be human</strong></h2></p>\n</div><div><p>\n</p><h4>Let your AI deal with the rest</h4>\n<p></p></div><div><p>Design your own AI with AI.XYZ</p></div><div>\n \n \n \n <p></p>\n \n </div></div>\n \n \n \n \n<div><p>\n</p><h3><strong>The digital world was designed to make us more productive but now navigating it all has become its own job.</strong></h3>\n<p></p></div>\n \n \n \n \n<section>\n <div>\n \n \n \n \n \n \n \n \n <p></p>\n \n \n </div>\n <div><div><p>\n</p><h2><strong>Take life a little easier</strong></h2>\n<p></p></div><div>\n \n \n \n <p></p>\n \n </div><div><p>\n</p><h2><strong>Tackles info<br />overload</strong></h2>\n<p></p></div><div><p>\n</p><h4>“Like ChatGPT, but way more proactive and useful because it’s designed by me, for only me”</h4>\n<p></p></div><div>\n \n \n \n <p></p>\n \n </div><div><p>\n</p><h2><strong>Never sits<br />around</strong></h2>\n<p></p></div><div><p>\n</p><h4>“Even if I’m not interacting with it, my AI looks for ways to simplify my day, surprising me with useful ideas”</h4>\n<p></p></div><div>\n \n \n \n <p></p>\n \n </div><div><p>\n</p><h2><strong>Supports and<br />inspires</strong></h2>\n<p></p></div><div><p>\n</p><h4>“It takes things off my plate, but also cheers me on throughout the day — helping me |
3,138 | cheers me on throughout the day — helping me navigate it all”</h4>\n<p></p></div></div>\n \n \n</section>\n \n \n \n \n<div><div><p>\n</p><h2><strong>Create your AI in 3 simple steps:</strong></h2>\n<p></p></div><div>\n<p><strong>STEP ONE</strong></p><h2><strong>Pick a face and voice</strong></h2><h4>Choose from our library of characters or add your own unique face and voice.</h4>\n</div><div>\n \n \n \n <p></p>\n \n </div><div>\n<p><strong>STEP TWO</strong></p><h2><strong>Create your AI’s persona and memory</strong></h2><h4>Decide who your AI is, its purpose and what it will help you with. Paste information that you want your AI to know.</h4>\n</div><div>\n \n \n \n <p></p>\n \n </div><div>\n<p><strong>STEP THREE</strong></p><h2><strong>Get started</strong></h2><h4>Ask your AI to help you with ideas and support throughout your day. Eventually it will be able to proactively support you.</h4>\n</div><div>\n \n \n \n <p></p>\n \n </div></div>\n \n \n \n \n<section>\n <div>\n \n \n \n \n \n \n \n \n <p></p>\n \n \n </div>\n <div><p>\n</p><h2><strong>Start training your AI to do things for you</strong></h2>\n<p></p></div>\n \n \n</section>\n \n</article>\n \n \n \n \n \n </div></div'), DocumentContent(id='-lKPLSb4N4dgMZlTgoDvJg', url='https://halist.ai/', title='Halist AI', extract='<div><div>\n<p><a href="/app/">Start for free</a></p><p>\nPowered by OpenAI GPT-3 and GPT-4.\n</p>\n<h2>ChatGPT. Lightning-fast and private. Everywhere.</h2>\n<h2>Optimized access to the AI on mobile.</h2>\n<p></p><p>\nTo install Halist on <b>iPhone</b>, open the web app in Safari and tap the "Share" icon. Then, tap "Add to Home Screen" and follow the prompts.\nTo install on <b>Android</b>, open the website in Chrome and tap the three dots in the top right corner. Then, tap "Add to Home screen" and follow the prompts.\n</p>\n</div></div>'), DocumentContent(id='_XIjx1YLPfI4cKePIEc_bQ', url='https://airin.ai/', title='Clone your best expert', extract='<div><section><section><div><p> | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: cheers me on throughout the day — helping me navigate it all”</h4>\n<p></p></div></div>\n \n \n</section>\n \n \n \n \n<div><div><p>\n</p><h2><strong>Create your AI in 3 simple steps:</strong></h2>\n<p></p></div><div>\n<p><strong>STEP ONE</strong></p><h2><strong>Pick a face and voice</strong></h2><h4>Choose from our library of characters or add your own unique face and voice.</h4>\n</div><div>\n \n \n \n <p></p>\n \n </div><div>\n<p><strong>STEP TWO</strong></p><h2><strong>Create your AI’s persona and memory</strong></h2><h4>Decide who your AI is, its purpose and what it will help you with. Paste information that you want your AI to know.</h4>\n</div><div>\n \n \n \n <p></p>\n \n </div><div>\n<p><strong>STEP THREE</strong></p><h2><strong>Get started</strong></h2><h4>Ask your AI to help you with ideas and support throughout your day. 
Eventually it will be able to proactively support you.</h4>\n</div><div>\n \n \n \n <p></p>\n \n </div></div>\n \n \n \n \n<section>\n <div>\n \n \n \n \n \n \n \n \n <p></p>\n \n \n </div>\n <div><p>\n</p><h2><strong>Start training your AI to do things for you</strong></h2>\n<p></p></div>\n \n \n</section>\n \n</article>\n \n \n \n \n \n </div></div'), DocumentContent(id='-lKPLSb4N4dgMZlTgoDvJg', url='https://halist.ai/', title='Halist AI', extract='<div><div>\n<p><a href="/app/">Start for free</a></p><p>\nPowered by OpenAI GPT-3 and GPT-4.\n</p>\n<h2>ChatGPT. Lightning-fast and private. Everywhere.</h2>\n<h2>Optimized access to the AI on mobile.</h2>\n<p></p><p>\nTo install Halist on <b>iPhone</b>, open the web app in Safari and tap the "Share" icon. Then, tap "Add to Home Screen" and follow the prompts.\nTo install on <b>Android</b>, open the website in Chrome and tap the three dots in the top right corner. Then, tap "Add to Home screen" and follow the prompts.\n</p>\n</div></div>'), DocumentContent(id='_XIjx1YLPfI4cKePIEc_bQ', url='https://airin.ai/', title='Clone your best expert', extract='<div><section><section><div><p> |
3,139 | expert', extract='<div><section><section><div><p> Airin clones how your top expert solves problems in as little as 2 hours. Airin creates an AI companion for the rest of your team by focusing on the patterns in your expert’s questions and hypotheses, not their answers. <a href="/how-it-works">Learn how it works </a></p></div></section><section><div><p> Your customers, agents, sales teams, and consultants can independently solve a wider-range of complex problems with an AI companion. This eliminates the need to maintain large teams of specialized experts. </p></div></section><section><div><p> Airin automates remote coaching for new hires and dramatically reduces time to productivity. New employees partner with your AI companion and meet productivity standards in half the time. </p></div></section></section>')])Here are some of the hottest AI agent startups and what they do: 1. [Bellow AI](https://bellow.ai/): This startup provides a search engine for machine intelligence. It allows users to get responses from multiple AIs, exploring the full space of machine intelligence and getting highly tailored results. 2. [Adept AI](https://www.adept.ai/): Adept is focused on creating useful general intelligence. 3. [HiOperator](https://www.hioperator.com/): HiOperator offers generative AI-enhanced customer support automation. It provides scalable, digital-first customer service and uses its software to empower agents to learn quickly and deliver accurate results. 4. [Stylo](https://www.askstylo.com/): Stylo uses AI to help manage customer support, identifying where to most effectively spend time to improve the customer experience. 5. [DirectAI](https://directai.io/): DirectAI allows users to build and deploy powerful computer vision models with plain language, without the need for code or training. 6. [Sidekick AI](https://www.sidekickai.co/): Sidekick AI is built to hold natural and dynamic conversations with customers, providing | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: expert', extract='<div><section><section><div><p> Airin clones how your top expert solves problems in as little as 2 hours. Airin creates an AI companion for the rest of your team by focusing on the patterns in your expert’s questions and hypotheses, not their answers. <a href="/how-it-works">Learn how it works </a></p></div></section><section><div><p> Your customers, agents, sales teams, and consultants can independently solve a wider-range of complex problems with an AI companion. This eliminates the need to maintain large teams of specialized experts. </p></div></section><section><div><p> Airin automates remote coaching for new hires and dramatically reduces time to productivity. New employees partner with your AI companion and meet productivity standards in half the time. </p></div></section></section>')])Here are some of the hottest AI agent startups and what they do: 1. [Bellow AI](https://bellow.ai/): This startup provides a search engine for machine intelligence. It allows users to get responses from multiple AIs, exploring the full space of machine intelligence and getting highly tailored results. 2. [Adept AI](https://www.adept.ai/): Adept is focused on creating useful general intelligence. 3. [HiOperator](https://www.hioperator.com/): HiOperator offers generative AI-enhanced customer support automation. 
It provides scalable, digital-first customer service and uses its software to empower agents to learn quickly and deliver accurate results. 4. [Stylo](https://www.askstylo.com/): Stylo uses AI to help manage customer support, identifying where to most effectively spend time to improve the customer experience. 5. [DirectAI](https://directai.io/): DirectAI allows users to build and deploy powerful computer vision models with plain language, without the need for code or training. 6. [Sidekick AI](https://www.sidekickai.co/): Sidekick AI is built to hold natural and dynamic conversations with customers, providing |
3,140 | dynamic conversations with customers, providing personalized service depending on the customer's needs. 7. [Hebbia](https://www.hebbia.ai/): Hebbia is reinventing search with cutting-edge AI, retrieving every answer, even insights humans overlook. 8. [AI.XYZ](https://www.ai.xyz/): AI.XYZ allows users to design their own AI, tackling information overload and providing support and inspiration throughout the day. 9. [Halist AI](https://halist.ai/): Halist AI provides optimized access to ChatGPT, powered by OpenAI GPT-3 and GPT-4, on mobile. 10. [Airin](https://airin.ai/): Airin clones how your top expert solves problems in as little as 2 hours, creating an AI companion for the rest of your team. It automates remote coaching for new hires and dramatically reduces time to productivity. > Finished chain. "Here are some of the hottest AI agent startups and what they do:\n\n1. [Bellow AI](https://bellow.ai/): This startup provides a search engine for machine intelligence. It allows users to get responses from multiple AIs, exploring the full space of machine intelligence and getting highly tailored results.\n\n2. [Adept AI](https://www.adept.ai/): Adept is focused on creating useful general intelligence.\n\n3. [HiOperator](https://www.hioperator.com/): HiOperator offers generative AI-enhanced customer support automation. It provides scalable, digital-first customer service and uses its software to empower agents to learn quickly and deliver accurate results.\n\n4. [Stylo](https://www.askstylo.com/): Stylo uses AI to help manage customer support, identifying where to most effectively spend time to improve the customer experience.\n\n5. [DirectAI](https://directai.io/): DirectAI allows users to build and deploy powerful computer vision models with plain language, without the need for code or training.\n\n6. [Sidekick AI](https://www.sidekickai.co/): Sidekick AI is built to hold natural and dynamic conversations with customers, | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: dynamic conversations with customers, providing personalized service depending on the customer's needs. 7. [Hebbia](https://www.hebbia.ai/): Hebbia is reinventing search with cutting-edge AI, retrieving every answer, even insights humans overlook. 8. [AI.XYZ](https://www.ai.xyz/): AI.XYZ allows users to design their own AI, tackling information overload and providing support and inspiration throughout the day. 9. [Halist AI](https://halist.ai/): Halist AI provides optimized access to ChatGPT, powered by OpenAI GPT-3 and GPT-4, on mobile. 10. [Airin](https://airin.ai/): Airin clones how your top expert solves problems in as little as 2 hours, creating an AI companion for the rest of your team. It automates remote coaching for new hires and dramatically reduces time to productivity. > Finished chain. "Here are some of the hottest AI agent startups and what they do:\n\n1. [Bellow AI](https://bellow.ai/): This startup provides a search engine for machine intelligence. It allows users to get responses from multiple AIs, exploring the full space of machine intelligence and getting highly tailored results.\n\n2. [Adept AI](https://www.adept.ai/): Adept is focused on creating useful general intelligence.\n\n3. [HiOperator](https://www.hioperator.com/): HiOperator offers generative AI-enhanced customer support automation. 
It provides scalable, digital-first customer service and uses its software to empower agents to learn quickly and deliver accurate results.\n\n4. [Stylo](https://www.askstylo.com/): Stylo uses AI to help manage customer support, identifying where to most effectively spend time to improve the customer experience.\n\n5. [DirectAI](https://directai.io/): DirectAI allows users to build and deploy powerful computer vision models with plain language, without the need for code or training.\n\n6. [Sidekick AI](https://www.sidekickai.co/): Sidekick AI is built to hold natural and dynamic conversations with customers, |
3,141 | natural and dynamic conversations with customers, providing personalized service depending on the customer's needs.\n\n7. [Hebbia](https://www.hebbia.ai/): Hebbia is reinventing search with cutting-edge AI, retrieving every answer, even insights humans overlook.\n\n8. [AI.XYZ](https://www.ai.xyz/): AI.XYZ allows users to design their own AI, tackling information overload and providing support and inspiration throughout the day.\n\n9. [Halist AI](https://halist.ai/): Halist AI provides optimized access to ChatGPT, powered by OpenAI GPT-3 and GPT-4, on mobile.\n\n10. [Airin](https://airin.ai/): Airin clones how your top expert solves problems in as little as 2 hours, creating an AI companion for the rest of your team. It automates remote coaching for new hires and dramatically reduces time to productivity.\n"Using the tool wrapper‚ÄãThis is the old way of using Metaphor - through our own in-house integration.from langchain.utilities import MetaphorSearchAPIWrappersearch = MetaphorSearchAPIWrapper()Call the API‚Äãresults takes in a Metaphor-optimized search query and a number of results (up to 500). It returns a list of results with title, url, author, and creation date.search.results("The best blog post about AI safety is definitely this: ", 10) [{'title': 'Core Views on AI Safety: When, Why, What, and How', 'url': 'https://www.anthropic.com/index/core-views-on-ai-safety', 'author': None, 'published_date': '2023-03-08'}, {'title': 'Extinction Risk from Artificial Intelligence', 'url': 'https://aisafety.wordpress.com/', 'author': None, 'published_date': '2013-10-08'}, {'title': 'The simple picture on AI safety - LessWrong', 'url': 'https://www.lesswrong.com/posts/WhNxG4r774bK32GcH/the-simple-picture-on-ai-safety', 'author': 'Alex Flint', 'published_date': '2018-05-27'}, {'title': 'No Time Like The Present For AI Safety Work', 'url': | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: natural and dynamic conversations with customers, providing personalized service depending on the customer's needs.\n\n7. [Hebbia](https://www.hebbia.ai/): Hebbia is reinventing search with cutting-edge AI, retrieving every answer, even insights humans overlook.\n\n8. [AI.XYZ](https://www.ai.xyz/): AI.XYZ allows users to design their own AI, tackling information overload and providing support and inspiration throughout the day.\n\n9. [Halist AI](https://halist.ai/): Halist AI provides optimized access to ChatGPT, powered by OpenAI GPT-3 and GPT-4, on mobile.\n\n10. [Airin](https://airin.ai/): Airin clones how your top expert solves problems in as little as 2 hours, creating an AI companion for the rest of your team. It automates remote coaching for new hires and dramatically reduces time to productivity.\n"Using the tool wrapper‚ÄãThis is the old way of using Metaphor - through our own in-house integration.from langchain.utilities import MetaphorSearchAPIWrappersearch = MetaphorSearchAPIWrapper()Call the API‚Äãresults takes in a Metaphor-optimized search query and a number of results (up to 500). 
It returns a list of results with title, url, author, and creation date.search.results("The best blog post about AI safety is definitely this: ", 10) [{'title': 'Core Views on AI Safety: When, Why, What, and How', 'url': 'https://www.anthropic.com/index/core-views-on-ai-safety', 'author': None, 'published_date': '2023-03-08'}, {'title': 'Extinction Risk from Artificial Intelligence', 'url': 'https://aisafety.wordpress.com/', 'author': None, 'published_date': '2013-10-08'}, {'title': 'The simple picture on AI safety - LessWrong', 'url': 'https://www.lesswrong.com/posts/WhNxG4r774bK32GcH/the-simple-picture-on-ai-safety', 'author': 'Alex Flint', 'published_date': '2018-05-27'}, {'title': 'No Time Like The Present For AI Safety Work', 'url': |
3,142 | Like The Present For AI Safety Work', 'url': 'https://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/', 'author': None, 'published_date': '2015-05-29'}, {'title': 'A plea for solutionism on AI safety - LessWrong', 'url': 'https://www.lesswrong.com/posts/ASMX9ss3J5G3GZdok/a-plea-for-solutionism-on-ai-safety', 'author': 'Jasoncrawford', 'published_date': '2023-06-09'}, {'title': 'The Artificial Intelligence Revolution: Part 1 - Wait But Why', 'url': 'https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html', 'author': 'Tim Urban', 'published_date': '2015-01-22'}, {'title': 'Anthropic: Core Views on AI Safety: When, Why, What, and How - EA Forum', 'url': 'https://forum.effectivealtruism.org/posts/uGDCaPFaPkuxAowmH/anthropic-core-views-on-ai-safety-when-why-what-and-how', 'author': 'Jonmenaster', 'published_date': '2023-03-09'}, {'title': "[Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2 - LessWrong", 'url': 'https://www.lesswrong.com/posts/QnBZkNJNbJK9k5Xi7/linkpost-sam-altman-s-2015-blog-posts-machine-intelligence', 'author': 'Olivia Jimenez', 'published_date': '2023-04-28'}, {'title': 'The Proof of Doom - LessWrong', 'url': 'https://www.lesswrong.com/posts/xBrpph9knzWdtMWeQ/the-proof-of-doom', 'author': 'Johnlawrenceaspden', 'published_date': '2022-03-09'}, {'title': "Anthropic's Core Views on AI Safety - LessWrong", 'url': 'https://www.lesswrong.com/posts/xhKr5KtvdJRssMeJ3/anthropic-s-core-views-on-ai-safety', 'author': 'Zac Hatfield-Dodds', 'published_date': '2023-03-09'}]Adding filters‚ÄãWe can also add filters to our search. include_domains: Optional[List[str]] - List of domains to include in the search. If specified, results will only come from these domains. Only one of include_domains and exclude_domains should be specified.exclude_domains: Optional[List[str]] - List | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. 
->: Like The Present For AI Safety Work', 'url': 'https://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/', 'author': None, 'published_date': '2015-05-29'}, {'title': 'A plea for solutionism on AI safety - LessWrong', 'url': 'https://www.lesswrong.com/posts/ASMX9ss3J5G3GZdok/a-plea-for-solutionism-on-ai-safety', 'author': 'Jasoncrawford', 'published_date': '2023-06-09'}, {'title': 'The Artificial Intelligence Revolution: Part 1 - Wait But Why', 'url': 'https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html', 'author': 'Tim Urban', 'published_date': '2015-01-22'}, {'title': 'Anthropic: Core Views on AI Safety: When, Why, What, and How - EA Forum', 'url': 'https://forum.effectivealtruism.org/posts/uGDCaPFaPkuxAowmH/anthropic-core-views-on-ai-safety-when-why-what-and-how', 'author': 'Jonmenaster', 'published_date': '2023-03-09'}, {'title': "[Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2 - LessWrong", 'url': 'https://www.lesswrong.com/posts/QnBZkNJNbJK9k5Xi7/linkpost-sam-altman-s-2015-blog-posts-machine-intelligence', 'author': 'Olivia Jimenez', 'published_date': '2023-04-28'}, {'title': 'The Proof of Doom - LessWrong', 'url': 'https://www.lesswrong.com/posts/xBrpph9knzWdtMWeQ/the-proof-of-doom', 'author': 'Johnlawrenceaspden', 'published_date': '2022-03-09'}, {'title': "Anthropic's Core Views on AI Safety - LessWrong", 'url': 'https://www.lesswrong.com/posts/xhKr5KtvdJRssMeJ3/anthropic-s-core-views-on-ai-safety', 'author': 'Zac Hatfield-Dodds', 'published_date': '2023-03-09'}]Adding filters‚ÄãWe can also add filters to our search. include_domains: Optional[List[str]] - List of domains to include in the search. If specified, results will only come from these domains. Only one of include_domains and exclude_domains should be specified.exclude_domains: Optional[List[str]] - List |
3,143 | Optional[List[str]] - List of domains to exclude in the search. If specified, results will only come from these domains. Only one of include_domains and exclude_domains should be specified.start_crawl_date: Optional[str] - "Crawl date" refers to the date that Metaphor discovered a link, which is more granular and can be more useful than published date. If start_crawl_date is specified, results will only include links that were crawled after start_crawl_date. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ)end_crawl_date: Optional[str] - "Crawl date" refers to the date that Metaphor discovered a link, which is more granular and can be more useful than published date. If endCrawlDate is specified, results will only include links that were crawled before end_crawl_date. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ)start_published_date: Optional[str] - If specified, only links with a published date after start_published_date will be returned. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ). Note that for some links, we have no published date, and these links will be excluded from the results if start_published_date is specified.end_published_date: Optional[str] - If specified, only links with a published date before end_published_date will be returned. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ). Note that for some links, we have no published date, and these links will be excluded from the results if end_published_date is specified.See full docs here.search.results( "The best blog post about AI safety is definitely this: ", 10, include_domains=["lesswrong.com"], start_published_date="2019-01-01",)Use Metaphor as a tool‚ÄãMetaphor can be used as a tool that gets URLs that other tools such as browsing tools.from langchain.agents.agent_toolkits import PlayWrightBrowserToolkitfrom langchain.tools.playwright.utils import ( create_async_playwright_browser, # A synchronous browser is available, though it | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: Optional[List[str]] - List of domains to exclude in the search. If specified, results will only come from these domains. Only one of include_domains and exclude_domains should be specified.start_crawl_date: Optional[str] - "Crawl date" refers to the date that Metaphor discovered a link, which is more granular and can be more useful than published date. If start_crawl_date is specified, results will only include links that were crawled after start_crawl_date. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ)end_crawl_date: Optional[str] - "Crawl date" refers to the date that Metaphor discovered a link, which is more granular and can be more useful than published date. If endCrawlDate is specified, results will only include links that were crawled before end_crawl_date. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ)start_published_date: Optional[str] - If specified, only links with a published date after start_published_date will be returned. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ). Note that for some links, we have no published date, and these links will be excluded from the results if start_published_date is specified.end_published_date: Optional[str] - If specified, only links with a published date before end_published_date will be returned. 
Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ). Note that for some links, we have no published date, and these links will be excluded from the results if end_published_date is specified.See full docs here.search.results( "The best blog post about AI safety is definitely this: ", 10, include_domains=["lesswrong.com"], start_published_date="2019-01-01",)Use Metaphor as a tool: Metaphor can be used as a tool that gets URLs which other tools, such as browsing tools, can then open.from langchain.agents.agent_toolkits import PlayWrightBrowserToolkitfrom langchain.tools.playwright.utils import ( create_async_playwright_browser, # A synchronous browser is available, though it
3,144 | # A synchronous browser is available, though it isn't compatible with jupyter.)async_browser = create_async_playwright_browser()toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)tools = toolkit.get_tools()tools_by_name = {tool.name: tool for tool in tools}print(tools_by_name.keys())navigate_tool = tools_by_name["navigate_browser"]extract_text = tools_by_name["extract_text"]from langchain.agents import initialize_agent, AgentTypefrom langchain.chat_models import ChatOpenAIfrom langchain.tools import MetaphorSearchResultsllm = ChatOpenAI(model_name="gpt-4", temperature=0.7)metaphor_tool = MetaphorSearchResults(api_wrapper=search)agent_chain = initialize_agent( [metaphor_tool, extract_text, navigate_tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)agent_chain.run( "find me an interesting tweet about AI safety using Metaphor, then tell me the first sentence in the post. Do not finish until able to retrieve the first sentence.")PreviousLemon AgentNextNuclia UnderstandingUsing their SDKUse in an agentUsing the tool wrapperCall the APIAdding filtersUse Metaphor as a toolCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. | Metaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page. ->: # A synchronous browser is available, though it isn't compatible with jupyter.)async_browser = create_async_playwright_browser()toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)tools = toolkit.get_tools()tools_by_name = {tool.name: tool for tool in tools}print(tools_by_name.keys())navigate_tool = tools_by_name["navigate_browser"]extract_text = tools_by_name["extract_text"]from langchain.agents import initialize_agent, AgentTypefrom langchain.chat_models import ChatOpenAIfrom langchain.tools import MetaphorSearchResultsllm = ChatOpenAI(model_name="gpt-4", temperature=0.7)metaphor_tool = MetaphorSearchResults(api_wrapper=search)agent_chain = initialize_agent( [metaphor_tool, extract_text, navigate_tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)agent_chain.run( "find me an interesting tweet about AI safety using Metaphor, then tell me the first sentence in the post. Do not finish until able to retrieve the first sentence.")PreviousLemon AgentNextNuclia UnderstandingUsing their SDKUse in an agentUsing the tool wrapperCall the APIAdding filtersUse Metaphor as a toolCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
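The filter parameters documented above can be combined in a single results call. The following is a minimal sketch, not taken from the original page: the query string, the excluded domain, and the date values are illustrative placeholders, and only one of include_domains/exclude_domains is set, as the docs require.

from langchain.utilities import MetaphorSearchAPIWrapper

search = MetaphorSearchAPIWrapper()

# Combine an exclude-domain filter with a crawl-date and a published-date window.
# Dates use the ISO 8601 format described above.
results = search.results(
    "Here is a thoughtful essay on AI interpretability: ",
    5,
    exclude_domains=["twitter.com"],
    start_crawl_date="2023-01-01T00:00:00Z",
    end_published_date="2023-06-30T00:00:00Z",
)
for r in results:
    # Each result is a dict with title, url, author, and published_date,
    # as shown in the sample output above.
    print(r["title"], r["url"], r["published_date"])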
3,145 | Wolfram Alpha | 🦜️🔗 Langchain
Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAlpha VantageApifyArXivAWS LambdaShell (bash)Bearly Code InterpreterBing SearchBrave SearchChatGPT PluginsDall-E Image GeneratorDataForSeoDuckDuckGo SearchEden AIEleven Labs Text2SpeechFile SystemGolden QueryGoogle DriveGoogle PlacesGoogle SearchGoogle SerperGradioGraphQLHuggingFace Hub ToolsHuman as a toolIFTTT WebHooksLemon AgentMetaphor SearchNuclia UnderstandingOpenWeatherMapPubMedRequestsSceneXplainSearch ToolsSearchApiSearxNG SearchSerpAPITwilioWikipediaWolfram AlphaYahoo Finance NewsYouTubeZapier Natural Language ActionsAgents and toolkitsMemoryCallbacksChat loadersComponentsToolsWolfram AlphaWolfram AlphaThis notebook goes over how to use the wolfram alpha component.First, you need to set up your Wolfram Alpha developer account and get your APP ID:Go to wolfram alpha and sign up for a developer account hereCreate an app and get your APP IDpip install wolframalphaThen we will need to set some environment variables:Save your APP ID into WOLFRAM_ALPHA_APPID env variablepip install wolframalphaimport osos.environ["WOLFRAM_ALPHA_APPID"] = ""from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapperwolfram = WolframAlphaAPIWrapper()wolfram.run("What is 2x+5 = -3x + 7?") 'x = 2/5'PreviousWikipediaNextYahoo Finance NewsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright ¬© 2023 LangChain, Inc. | This notebook goes over how to use the wolfram alpha component. | This notebook goes over how to use the wolfram alpha component. ->: Wolfram Alpha | ü¶úÔ∏èüîó Langchain
Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAlpha VantageApifyArXivAWS LambdaShell (bash)Bearly Code InterpreterBing SearchBrave SearchChatGPT PluginsDall-E Image GeneratorDataForSeoDuckDuckGo SearchEden AIEleven Labs Text2SpeechFile SystemGolden QueryGoogle DriveGoogle PlacesGoogle SearchGoogle SerperGradioGraphQLHuggingFace Hub ToolsHuman as a toolIFTTT WebHooksLemon AgentMetaphor SearchNuclia UnderstandingOpenWeatherMapPubMedRequestsSceneXplainSearch ToolsSearchApiSearxNG SearchSerpAPITwilioWikipediaWolfram AlphaYahoo Finance NewsYouTubeZapier Natural Language ActionsAgents and toolkitsMemoryCallbacksChat loadersComponentsToolsWolfram AlphaWolfram AlphaThis notebook goes over how to use the wolfram alpha component.First, you need to set up your Wolfram Alpha developer account and get your APP ID:Go to wolfram alpha and sign up for a developer account hereCreate an app and get your APP IDpip install wolframalphaThen we will need to set some environment variables:Save your APP ID into WOLFRAM_ALPHA_APPID env variablepip install wolframalphaimport osos.environ["WOLFRAM_ALPHA_APPID"] = ""from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapperwolfram = WolframAlphaAPIWrapper()wolfram.run("What is 2x+5 = -3x + 7?") 'x = 2/5'PreviousWikipediaNextYahoo Finance NewsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright ¬© 2023 LangChain, Inc. |
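The wrapper above can also be handed to an agent as a tool. A minimal sketch, assuming the WOLFRAM_ALPHA_APPID environment variable is already set; the tool name and description below are illustrative and not part of the original page.

from langchain.agents import Tool
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()

# Wrap the run() method so an agent can call Wolfram Alpha like any other tool.
wolfram_tool = Tool(
    name="wolfram-alpha",
    description="Answers math and factual queries via the Wolfram Alpha API.",
    func=wolfram.run,
)

print(wolfram_tool.run("What is 2x+5 = -3x + 7?"))  # expected: 'x = 2/5'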
3,146 | Apify | 🦜️🔗 Langchain | This notebook shows how to use the Apify integration for LangChain. | This notebook shows how to use the Apify integration for LangChain. ->: Apify | 🦜️🔗 Langchain
3,147 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAlpha VantageApifyArXivAWS LambdaShell (bash)Bearly Code InterpreterBing SearchBrave SearchChatGPT PluginsDall-E Image GeneratorDataForSeoDuckDuckGo SearchEden AIEleven Labs Text2SpeechFile SystemGolden QueryGoogle DriveGoogle PlacesGoogle SearchGoogle SerperGradioGraphQLHuggingFace Hub ToolsHuman as a toolIFTTT WebHooksLemon AgentMetaphor SearchNuclia UnderstandingOpenWeatherMapPubMedRequestsSceneXplainSearch ToolsSearchApiSearxNG SearchSerpAPITwilioWikipediaWolfram AlphaYahoo Finance NewsYouTubeZapier Natural Language ActionsAgents and toolkitsMemoryCallbacksChat loadersComponentsToolsApifyApifyThis notebook shows how to use the Apify integration for LangChain.Apify is a cloud platform for web scraping and data extraction,
which provides an ecosystem of more than a thousand
ready-made apps called Actors for various web scraping, crawling, and data extraction use cases.
For example, you can use it to extract Google Search results, Instagram and Facebook profiles, products from Amazon or Shopify, Google Maps reviews, and so on.In this example, we'll use the Website Content Crawler Actor,
which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, | This notebook shows how to use the Apify integration for LangChain. | This notebook shows how to use the Apify integration for LangChain. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAlpha VantageApifyArXivAWS LambdaShell (bash)Bearly Code InterpreterBing SearchBrave SearchChatGPT PluginsDall-E Image GeneratorDataForSeoDuckDuckGo SearchEden AIEleven Labs Text2SpeechFile SystemGolden QueryGoogle DriveGoogle PlacesGoogle SearchGoogle SerperGradioGraphQLHuggingFace Hub ToolsHuman as a toolIFTTT WebHooksLemon AgentMetaphor SearchNuclia UnderstandingOpenWeatherMapPubMedRequestsSceneXplainSearch ToolsSearchApiSearxNG SearchSerpAPITwilioWikipediaWolfram AlphaYahoo Finance NewsYouTubeZapier Natural Language ActionsAgents and toolkitsMemoryCallbacksChat loadersComponentsToolsApifyApifyThis notebook shows how to use the Apify integration for LangChain.Apify is a cloud platform for web scraping and data extraction,
which provides an ecosystem of more than a thousand
ready-made apps called Actors for various web scraping, crawling, and data extraction use cases.
For example, you can use it to extract Google Search results, Instagram and Facebook profiles, products from Amazon or Shopify, Google Maps reviews, and so on.In this example, we'll use the Website Content Crawler Actor,
which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, |
3,148 | and extract text content from the web pages. Then we feed the documents into a vector index and answer questions from it.#!pip install apify-client openai langchain chromadb tiktokenFirst, import ApifyWrapper into your source code:from langchain.document_loaders.base import Documentfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.utilities import ApifyWrapperInitialize it using your Apify API token and for the purpose of this example, also with your OpenAI API key:import osos.environ["OPENAI_API_KEY"] = "Your OpenAI API key"os.environ["APIFY_API_TOKEN"] = "Your Apify API token"apify = ApifyWrapper()Then run the Actor, wait for it to finish, and fetch its results from the Apify dataset into a LangChain document loader.Note that if you already have some results in an Apify dataset, you can load them directly using ApifyDatasetLoader, as shown in this notebook. In that notebook, you'll also find the explanation of the dataset_mapping_function, which is used to map fields from the Apify dataset records to LangChain Document fields.loader = apify.call_actor( actor_id="apify/website-content-crawler", run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]}, dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ),)Initialize the vector index from the crawled documents:index = VectorstoreIndexCreator().from_loaders([loader])And finally, query the vector index:query = "What is LangChain?"result = index.query_with_sources(query)print(result["answer"])print(result["sources"]) LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities. https://python.langchain.com/en/latest/modules/models/llms.html, | This notebook shows how to use the Apify integration for LangChain. | This notebook shows how to use the Apify integration for LangChain. ->: and extract text content from the web pages. Then we feed the documents into a vector index and answer questions from it.#!pip install apify-client openai langchain chromadb tiktokenFirst, import ApifyWrapper into your source code:from langchain.document_loaders.base import Documentfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.utilities import ApifyWrapperInitialize it using your Apify API token and for the purpose of this example, also with your OpenAI API key:import osos.environ["OPENAI_API_KEY"] = "Your OpenAI API key"os.environ["APIFY_API_TOKEN"] = "Your Apify API token"apify = ApifyWrapper()Then run the Actor, wait for it to finish, and fetch its results from the Apify dataset into a LangChain document loader.Note that if you already have some results in an Apify dataset, you can load them directly using ApifyDatasetLoader, as shown in this notebook. 
In that notebook, you'll also find the explanation of the dataset_mapping_function, which is used to map fields from the Apify dataset records to LangChain Document fields.loader = apify.call_actor( actor_id="apify/website-content-crawler", run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]}, dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ),)Initialize the vector index from the crawled documents:index = VectorstoreIndexCreator().from_loaders([loader])And finally, query the vector index:query = "What is LangChain?"result = index.query_with_sources(query)print(result["answer"])print(result["sources"]) LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities. https://python.langchain.com/en/latest/modules/models/llms.html, |
3,149 | https://python.langchain.com/en/latest/getting_started/getting_started.htmlPreviousAlpha VantageNextArXivCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This notebook shows how to use the Apify integration for LangChain. | This notebook shows how to use the Apify integration for LangChain. ->: https://python.langchain.com/en/latest/getting_started/getting_started.htmlPreviousAlpha VantageNextArXivCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
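As noted above, results that already sit in an Apify dataset can be loaded directly with ApifyDatasetLoader instead of re-running the Actor. A minimal sketch, assuming an existing dataset ID from a finished Website Content Crawler run (the ID below is a placeholder) and reusing the same mapping function as in the example above.

from langchain.document_loaders import ApifyDatasetLoader
from langchain.document_loaders.base import Document

loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",  # placeholder for a real dataset ID
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
documents = loader.load()
print(len(documents))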
3,150 | DataForSeo | 🦜️🔗 Langchain | This notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc. | This notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc. ->: DataForSeo | 🦜️🔗 Langchain
3,151 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAlpha VantageApifyArXivAWS LambdaShell (bash)Bearly Code InterpreterBing SearchBrave SearchChatGPT PluginsDall-E Image GeneratorDataForSeoDuckDuckGo SearchEden AIEleven Labs Text2SpeechFile SystemGolden QueryGoogle DriveGoogle PlacesGoogle SearchGoogle SerperGradioGraphQLHuggingFace Hub ToolsHuman as a toolIFTTT WebHooksLemon AgentMetaphor SearchNuclia UnderstandingOpenWeatherMapPubMedRequestsSceneXplainSearch ToolsSearchApiSearxNG SearchSerpAPITwilioWikipediaWolfram AlphaYahoo Finance NewsYouTubeZapier Natural Language ActionsAgents and toolkitsMemoryCallbacksChat loadersComponentsToolsDataForSeoOn this pageDataForSeoThis notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc.from langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapperSetting up the API credentials‚ÄãYou can obtain your API credentials by registering on the DataForSeo website.import osos.environ["DATAFORSEO_LOGIN"] = "your_api_access_username"os.environ["DATAFORSEO_PASSWORD"] = "your_api_access_password"wrapper = DataForSeoAPIWrapper()The run method will return the first result snippet from one of the following elements: answer_box, knowledge_graph, featured_snippet, shopping, organic.wrapper.run("Weather in Los Angeles")The Difference Between run and results‚Äãrun and results are two methods provided by the DataForSeoAPIWrapper class.The run method executes the search and returns the first result snippet from the answer box, knowledge graph, featured snippet, shopping, or organic results. These | This notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc. | This notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc. ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAlpha VantageApifyArXivAWS LambdaShell (bash)Bearly Code InterpreterBing SearchBrave SearchChatGPT PluginsDall-E Image GeneratorDataForSeoDuckDuckGo SearchEden AIEleven Labs Text2SpeechFile SystemGolden QueryGoogle DriveGoogle PlacesGoogle SearchGoogle SerperGradioGraphQLHuggingFace Hub ToolsHuman as a toolIFTTT WebHooksLemon AgentMetaphor SearchNuclia UnderstandingOpenWeatherMapPubMedRequestsSceneXplainSearch ToolsSearchApiSearxNG SearchSerpAPITwilioWikipediaWolfram AlphaYahoo Finance NewsYouTubeZapier Natural Language ActionsAgents and toolkitsMemoryCallbacksChat loadersComponentsToolsDataForSeoOn this pageDataForSeoThis notebook demonstrates how to use the DataForSeo API to obtain search engine results. 
The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc.from langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapperSetting up the API credentials‚ÄãYou can obtain your API credentials by registering on the DataForSeo website.import osos.environ["DATAFORSEO_LOGIN"] = "your_api_access_username"os.environ["DATAFORSEO_PASSWORD"] = "your_api_access_password"wrapper = DataForSeoAPIWrapper()The run method will return the first result snippet from one of the following elements: answer_box, knowledge_graph, featured_snippet, shopping, organic.wrapper.run("Weather in Los Angeles")The Difference Between run and results‚Äãrun and results are two methods provided by the DataForSeoAPIWrapper class.The run method executes the search and returns the first result snippet from the answer box, knowledge graph, featured snippet, shopping, or organic results. These |
3,152 | snippet, shopping, or organic results. These elements are sorted by priority from highest to lowest.The results method returns a JSON response configured according to the parameters set in the wrapper. This allows for more flexibility in terms of what data you want to return from the API.Getting Results as JSON‚ÄãYou can customize the result types and fields you want to return in the JSON response. You can also set a maximum count for the number of top results to return.json_wrapper = DataForSeoAPIWrapper( json_result_types=["organic", "knowledge_graph", "answer_box"], json_result_fields=["type", "title", "description", "text"], top_count=3,)json_wrapper.results("Bill Gates")Customizing Location and Language‚ÄãYou can specify the location and language of your search results by passing additional parameters to the API wrapper.customized_wrapper = DataForSeoAPIWrapper( top_count=10, json_result_types=["organic", "local_pack"], json_result_fields=["title", "description", "type"], params={"location_name": "Germany", "language_code": "en"},)customized_wrapper.results("coffee near me")Customizing the Search Engine‚ÄãYou can also specify the search engine you want to use.customized_wrapper = DataForSeoAPIWrapper( top_count=10, json_result_types=["organic", "local_pack"], json_result_fields=["title", "description", "type"], params={"location_name": "Germany", "language_code": "en", "se_name": "bing"},)customized_wrapper.results("coffee near me")Customizing the Search Type‚ÄãThe API wrapper also allows you to specify the type of search you want to perform. For example, you can perform a maps search.maps_search = DataForSeoAPIWrapper( top_count=10, json_result_fields=["title", "value", "address", "rating", "type"], params={ "location_coordinate": "52.512,13.36,12z", "language_code": "en", "se_type": "maps", },)maps_search.results("coffee near me")Integration with Langchain Agents‚ÄãYou can use the Tool class | This notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc. | This notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc. ->: snippet, shopping, or organic results. These elements are sorted by priority from highest to lowest.The results method returns a JSON response configured according to the parameters set in the wrapper. This allows for more flexibility in terms of what data you want to return from the API.Getting Results as JSON‚ÄãYou can customize the result types and fields you want to return in the JSON response. 
You can also set a maximum count for the number of top results to return.json_wrapper = DataForSeoAPIWrapper( json_result_types=["organic", "knowledge_graph", "answer_box"], json_result_fields=["type", "title", "description", "text"], top_count=3,)json_wrapper.results("Bill Gates")Customizing Location and Language‚ÄãYou can specify the location and language of your search results by passing additional parameters to the API wrapper.customized_wrapper = DataForSeoAPIWrapper( top_count=10, json_result_types=["organic", "local_pack"], json_result_fields=["title", "description", "type"], params={"location_name": "Germany", "language_code": "en"},)customized_wrapper.results("coffee near me")Customizing the Search Engine‚ÄãYou can also specify the search engine you want to use.customized_wrapper = DataForSeoAPIWrapper( top_count=10, json_result_types=["organic", "local_pack"], json_result_fields=["title", "description", "type"], params={"location_name": "Germany", "language_code": "en", "se_name": "bing"},)customized_wrapper.results("coffee near me")Customizing the Search Type‚ÄãThe API wrapper also allows you to specify the type of search you want to perform. For example, you can perform a maps search.maps_search = DataForSeoAPIWrapper( top_count=10, json_result_fields=["title", "value", "address", "rating", "type"], params={ "location_coordinate": "52.512,13.36,12z", "language_code": "en", "se_type": "maps", },)maps_search.results("coffee near me")Integration with Langchain Agents‚ÄãYou can use the Tool class |
3,153 | Langchain Agents​You can use the Tool class from the langchain.agents module to integrate the DataForSeoAPIWrapper with a langchain agent. The Tool class encapsulates a function that the agent can call.from langchain.agents import Toolsearch = DataForSeoAPIWrapper( top_count=3, json_result_types=["organic"], json_result_fields=["title", "description", "type"],)tool = Tool( name="google-search-answer", description="My new answer tool", func=search.run,)json_tool = Tool( name="google-search-json", description="My new json tool", func=search.results,)PreviousDall-E Image GeneratorNextDuckDuckGo SearchSetting up the API credentialsThe Difference Between run and resultsGetting Results as JSONCustomizing Location and LanguageCustomizing the Search EngineCustomizing the Search TypeIntegration with Langchain AgentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc. | This notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc. ->: Langchain Agents​You can use the Tool class from the langchain.agents module to integrate the DataForSeoAPIWrapper with a langchain agent. The Tool class encapsulates a function that the agent can call.from langchain.agents import Toolsearch = DataForSeoAPIWrapper( top_count=3, json_result_types=["organic"], json_result_fields=["title", "description", "type"],)tool = Tool( name="google-search-answer", description="My new answer tool", func=search.run,)json_tool = Tool( name="google-search-json", description="My new json tool", func=search.results,)PreviousDall-E Image GeneratorNextDuckDuckGo SearchSetting up the API credentialsThe Difference Between run and resultsGetting Results as JSONCustomizing Location and LanguageCustomizing the Search EngineCustomizing the Search TypeIntegration with Langchain AgentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
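The Tool objects defined above can be plugged into an agent in the usual way. A minimal sketch, assuming DataForSeo and OpenAI credentials are already set in the environment; the model choice and the prompt are illustrative, not from the original page.

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapper

# Wrap the run() method, which returns the top snippet for a query.
search = DataForSeoAPIWrapper(top_count=3, json_result_types=["organic"])
tool = Tool(
    name="google-search-answer",
    description="Returns the top search snippet for a query.",
    func=search.run,
)

llm = ChatOpenAI(model_name="gpt-4", temperature=0)
agent = initialize_agent(
    [tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What year was Bill Gates born?")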
3,154 | Bearly Code Interpreter | 🦜️🔗 Langchain | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: Bearly Code Interpreter | 🦜️🔗 Langchain
3,155 | Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAlpha VantageApifyArXivAWS LambdaShell (bash)Bearly Code InterpreterBing SearchBrave SearchChatGPT PluginsDall-E Image GeneratorDataForSeoDuckDuckGo SearchEden AIEleven Labs Text2SpeechFile SystemGolden QueryGoogle DriveGoogle PlacesGoogle SearchGoogle SerperGradioGraphQLHuggingFace Hub ToolsHuman as a toolIFTTT WebHooksLemon AgentMetaphor SearchNuclia UnderstandingOpenWeatherMapPubMedRequestsSceneXplainSearch ToolsSearchApiSearxNG SearchSerpAPITwilioWikipediaWolfram AlphaYahoo Finance NewsYouTubeZapier Natural Language ActionsAgents and toolkitsMemoryCallbacksChat loadersComponentsToolsBearly Code InterpreterBearly Code InterpreterBearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code InterpreterIn this notebook, we will create an example of an agent that uses Bearly to interact with datafrom langchain.chat_models import ChatOpenAIfrom langchain.tools import BearlyInterpreterToolfrom langchain.agents import initialize_agent, AgentTypeInitialize the interpreterbearly_tool = BearlyInterpreterTool(api_key="...")Let's add some files to the the sandboxbearly_tool.add_file( source_path="sample_data/Bristol.pdf", target_path="Bristol.pdf", description="")bearly_tool.add_file( source_path="sample_data/US_GDP.csv", target_path="US_GDP.csv", description="")Create a Tool object now. This is necessary, because we added the files, and we want the tool description to reflect thattools = [bearly_tool.as_tool()]tools[0].name 'bearly_interpreter'print(tools[0].description) Evaluates python code in a sandbox environment. The environment resets on every execution. You must | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: Skip to main contentü¶úÔ∏èüîó LangChainDocsUse casesIntegrationsAPICommunityChat our docsLangSmithJS/TS DocsSearchCTRLKProvidersAnthropicAWSGoogleMicrosoftOpenAIMoreComponentsLLMsChat modelsDocument loadersDocument transformersText embedding modelsVector storesRetrieversToolsAlpha VantageApifyArXivAWS LambdaShell (bash)Bearly Code InterpreterBing SearchBrave SearchChatGPT PluginsDall-E Image GeneratorDataForSeoDuckDuckGo SearchEden AIEleven Labs Text2SpeechFile SystemGolden QueryGoogle DriveGoogle PlacesGoogle SearchGoogle SerperGradioGraphQLHuggingFace Hub ToolsHuman as a toolIFTTT WebHooksLemon AgentMetaphor SearchNuclia UnderstandingOpenWeatherMapPubMedRequestsSceneXplainSearch ToolsSearchApiSearxNG SearchSerpAPITwilioWikipediaWolfram AlphaYahoo Finance NewsYouTubeZapier Natural Language ActionsAgents and toolkitsMemoryCallbacksChat loadersComponentsToolsBearly Code InterpreterBearly Code InterpreterBearly Code Interpreter allows for remote execution of code. 
This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code InterpreterIn this notebook, we will create an example of an agent that uses Bearly to interact with datafrom langchain.chat_models import ChatOpenAIfrom langchain.tools import BearlyInterpreterToolfrom langchain.agents import initialize_agent, AgentTypeInitialize the interpreterbearly_tool = BearlyInterpreterTool(api_key="...")Let's add some files to the the sandboxbearly_tool.add_file( source_path="sample_data/Bristol.pdf", target_path="Bristol.pdf", description="")bearly_tool.add_file( source_path="sample_data/US_GDP.csv", target_path="US_GDP.csv", description="")Create a Tool object now. This is necessary, because we added the files, and we want the tool description to reflect thattools = [bearly_tool.as_tool()]tools[0].name 'bearly_interpreter'print(tools[0].description) Evaluates python code in a sandbox environment. The environment resets on every execution. You must |
3,156 | environment resets on every execution. You must send the whole script every time and print your outputs. Script should be pure python code that can be evaluated. It should be in python format NOT markdown. The code should NOT be wrapped in backticks. All python packages including requests, matplotlib, scipy, numpy, pandas, etc are available. If you have any files outputted write them to "output/" relative to the execution path. Output can only be read from the directory, stdout, and stdin. Do not use things like plot.show() as it will not work instead write them out `output/` and a link to the file will be returned. print() any output and results so you can capture the output. The following files available in the evaluation environment: - path: `Bristol.pdf` first four lines: [] description: `` - path: `US_GDP.csv` first four lines: ['DATE,GDP\n', '1947-01-01,243.164\n', '1947-04-01,245.968\n', '1947-07-01,249.585\n'] description: ``Initialize an agentllm = ChatOpenAI(model="gpt-4", temperature=0)agent = initialize_agent( tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True, handle_parsing_errors=True)# Extract pdf contentagent.run("What is the text on page 3 of the pdf?") > Entering new AgentExecutor chain... Invoking: `bearly_interpreter` with `{'python_code': "import PyPDF2\n\n# Open the PDF file in read-binary mode\npdf_file = open('Bristol.pdf', 'rb')\n\n# Create a PDF file reader object\npdf_reader = PyPDF2.PdfFileReader(pdf_file)\n\n# Get the text from page 3\npage_obj = pdf_reader.getPage(2)\npage_text = page_obj.extractText()\n\n# Close the PDF file\npdf_file.close()\n\nprint(page_text)"}` {'stdout': '', 'stderr': 'Traceback (most recent call last):\n File "/tmp/project/main.py", line 7, in <module>\n pdf_reader = PyPDF2.PdfFileReader(pdf_file)\n File "/venv/lib/python3.10/site-packages/PyPDF2/_reader.py", line 1974, in __init__\n | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: environment resets on every execution. You must send the whole script every time and print your outputs. Script should be pure python code that can be evaluated. It should be in python format NOT markdown. The code should NOT be wrapped in backticks. All python packages including requests, matplotlib, scipy, numpy, pandas, etc are available. If you have any files outputted write them to "output/" relative to the execution path. Output can only be read from the directory, stdout, and stdin. Do not use things like plot.show() as it will not work instead write them out `output/` and a link to the file will be returned. print() any output and results so you can capture the output. The following files available in the evaluation environment: - path: `Bristol.pdf` first four lines: [] description: `` - path: `US_GDP.csv` first four lines: ['DATE,GDP\n', '1947-01-01,243.164\n', '1947-04-01,245.968\n', '1947-07-01,249.585\n'] description: ``Initialize an agentllm = ChatOpenAI(model="gpt-4", temperature=0)agent = initialize_agent( tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True, handle_parsing_errors=True)# Extract pdf contentagent.run("What is the text on page 3 of the pdf?") > Entering new AgentExecutor chain... 
Invoking: `bearly_interpreter` with `{'python_code': "import PyPDF2\n\n# Open the PDF file in read-binary mode\npdf_file = open('Bristol.pdf', 'rb')\n\n# Create a PDF file reader object\npdf_reader = PyPDF2.PdfFileReader(pdf_file)\n\n# Get the text from page 3\npage_obj = pdf_reader.getPage(2)\npage_text = page_obj.extractText()\n\n# Close the PDF file\npdf_file.close()\n\nprint(page_text)"}` {'stdout': '', 'stderr': 'Traceback (most recent call last):\n File "/tmp/project/main.py", line 7, in <module>\n pdf_reader = PyPDF2.PdfFileReader(pdf_file)\n File "/venv/lib/python3.10/site-packages/PyPDF2/_reader.py", line 1974, in __init__\n |
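The first sandbox run fails because PdfFileReader was removed in PyPDF2 3.0, so the agent retries with PdfReader, shown escaped in the next invocation. For readability, that corrected script is reproduced here in plain form; it assumes PyPDF2 >= 3.0 and the Bristol.pdf file that was uploaded to the sandbox earlier.

from PyPDF2 import PdfReader

# Open the PDF that was added to the sandbox.
pdf = PdfReader("Bristol.pdf")

# Pages are zero-indexed, so index 2 is page 3.
page_text = pdf.pages[2].extract_text()
print(page_text)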
3,157 | line 1974, in __init__\n deprecation_with_replacement("PdfFileReader", "PdfReader", "3.0.0")\n File "/venv/lib/python3.10/site-packages/PyPDF2/_utils.py", line 369, in deprecation_with_replacement\n deprecation(DEPR_MSG_HAPPENED.format(old_name, removed_in, new_name))\n File "/venv/lib/python3.10/site-packages/PyPDF2/_utils.py", line 351, in deprecation\n raise DeprecationError(msg)\nPyPDF2.errors.DeprecationError: PdfFileReader is deprecated and was removed in PyPDF2 3.0.0. Use PdfReader instead.\n', 'fileLinks': [], 'exitCode': 1} Invoking: `bearly_interpreter` with `{'python_code': "from PyPDF2 import PdfReader\n\n# Open the PDF file\npdf = PdfReader('Bristol.pdf')\n\n# Get the text from page 3\npage = pdf.pages[2]\npage_text = page.extract_text()\n\nprint(page_text)"}` {'stdout': '1 COVID-19 at Work: \nExposing h ow risk is assessed and its consequences in England and Sweden \nPeter Andersson and Tonia Novitz* \n1.Introduction\nT\nhe crisis which arose suddenly at the beginning of 2020 relating to coronavirus was immediately \ncentred on risk. Predictions ha d to be made swiftly regarding how it would spread, who it might \naffect and what measures could be taken to prevent exposure in everyday so cial interaction, \nincluding in the workplace. This was in no way a straightforward assessment, because initially so \nmuch was unknown. Those gaps in our knowledge have since, partially, been ameliorated. It is \nevident that not all those exposed to COVID-19 become ill, and many who contract the virus remain \nasymptomatic, so that the odds on becoming seriously ill may seem small. But those odds are also stacked against certain segments of the population. The likelihood of mortality and morbidity are associated with age and ethnicity as well as pre -existing medical conditions (such as diabetes), but \nalso with poverty which correlates to the extent of exposure in certain occupations.\n1 Some risks \narise which remain less predictable, | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: line 1974, in __init__\n deprecation_with_replacement("PdfFileReader", "PdfReader", "3.0.0")\n File "/venv/lib/python3.10/site-packages/PyPDF2/_utils.py", line 369, in deprecation_with_replacement\n deprecation(DEPR_MSG_HAPPENED.format(old_name, removed_in, new_name))\n File "/venv/lib/python3.10/site-packages/PyPDF2/_utils.py", line 351, in deprecation\n raise DeprecationError(msg)\nPyPDF2.errors.DeprecationError: PdfFileReader is deprecated and was removed in PyPDF2 3.0.0. Use PdfReader instead.\n', 'fileLinks': [], 'exitCode': 1} Invoking: `bearly_interpreter` with `{'python_code': "from PyPDF2 import PdfReader\n\n# Open the PDF file\npdf = PdfReader('Bristol.pdf')\n\n# Get the text from page 3\npage = pdf.pages[2]\npage_text = page.extract_text()\n\nprint(page_text)"}` {'stdout': '1 COVID-19 at Work: \nExposing h ow risk is assessed and its consequences in England and Sweden \nPeter Andersson and Tonia Novitz* \n1.Introduction\nT\nhe crisis which arose suddenly at the beginning of 2020 relating to coronavirus was immediately \ncentred on risk. 
Predictions ha d to be made swiftly regarding how it would spread, who it might \naffect and what measures could be taken to prevent exposure in everyday so cial interaction, \nincluding in the workplace. This was in no way a straightforward assessment, because initially so \nmuch was unknown. Those gaps in our knowledge have since, partially, been ameliorated. It is \nevident that not all those exposed to COVID-19 become ill, and many who contract the virus remain \nasymptomatic, so that the odds on becoming seriously ill may seem small. But those odds are also stacked against certain segments of the population. The likelihood of mortality and morbidity are associated with age and ethnicity as well as pre -existing medical conditions (such as diabetes), but \nalso with poverty which correlates to the extent of exposure in certain occupations.\n1 Some risks \narise which remain less predictable, |
3,158 | risks \narise which remain less predictable, as previously healthy people with no signs of particular \nvulnerability can experience serious long term illness as well and in rare cases will even die.2 \nPerceptions of risk in different countries have led to particular measures taken, ranging from handwashing to social distancing, use of personal protective equipment (PPE) such as face coverings, and even ‘lockdowns’ which have taken various forms.\n3 Use of testing and vaccines \nalso bec ame part of the remedial landscape, with their availability and administration being \n*This paper is part of the project An i nclusive and sustainable Swedish labour law – the way\nahead, dnr. 2017-03134 financed by the Swedish research council led by Petra Herzfeld Olssonat Stockholm University. The authors would like to thank her and other participants, Niklas\nBruun and Erik Sjödin for their helpful comments on earlier drafts. A much shorter article titled\n‘Risk Assessment and COVID -19: Systems at work (or not) in England and Sweden’ is published\nin the (2021) Comparative Labour and Social Security Review /\n Revue de droit comparé du\ntravail et de la sécurité sociale.\n1 Public Health England, Disparities in the risk and outcomes of COVID-19 (2 June 2020 -\nhttps://assets.publishing.service.gov.uk/government/uploads/ system /uploads/attachment_data/file\n/890258/disparities_review.pdf.\n2 Nisreen A. Alwan, ‘Track COVID- 19 sickness, not just positive tests and deaths’ ( 2020)\n584.7820 Nature 170- 171; Elisabeth Mahase, ‘Covid-19: What do we know about “long covid”?’\n(2020) BMJ 370.\n3 Sarah Dryhurst, Claudia R. Schneider, John Kerr, Alexandra LJ Freeman, Gabriel Recchia,\nAnne Marthe Van Der Bles, David Spiegelhalter, and Sander van der Linden, ‘Risk perceptionsof COVID-19 around the world’ (2020) 23(7- 8) Journal of Risk Research 994; Wändi Bruine de\nBruin, and Daniel Bennett, ‘Relationships between initial COVID -19 risk | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: risks \narise which remain less predictable, as previously healthy people with no signs of particular \nvulnerability can experience serious long term illness as well and in rare cases will even die.2 \nPerceptions of risk in different countries have led to particular measures taken, ranging from handwashing to social distancing, use of personal protective equipment (PPE) such as face coverings, and even ‘lockdowns’ which have taken various forms.\n3 Use of testing and vaccines \nalso bec ame part of the remedial landscape, with their availability and administration being \n*This paper is part of the project An i nclusive and sustainable Swedish labour law – the way\nahead, dnr. 2017-03134 financed by the Swedish research council led by Petra Herzfeld Olssonat Stockholm University. The authors would like to thank her and other participants, Niklas\nBruun and Erik Sjödin for their helpful comments on earlier drafts. 
A much shorter article titled\n‘Risk Assessment and COVID -19: Systems at work (or not) in England and Sweden’ is published\nin the (2021) Comparative Labour and Social Security Review /\n Revue de droit comparé du\ntravail et de la sécurité sociale.\n1 Public Health England, Disparities in the risk and outcomes of COVID-19 (2 June 2020 -\nhttps://assets.publishing.service.gov.uk/government/uploads/ system /uploads/attachment_data/file\n/890258/disparities_review.pdf.\n2 Nisreen A. Alwan, ‘Track COVID- 19 sickness, not just positive tests and deaths’ ( 2020)\n584.7820 Nature 170- 171; Elisabeth Mahase, ‘Covid-19: What do we know about “long covid”?’\n(2020) BMJ 370.\n3 Sarah Dryhurst, Claudia R. Schneider, John Kerr, Alexandra LJ Freeman, Gabriel Recchia,\nAnne Marthe Van Der Bles, David Spiegelhalter, and Sander van der Linden, ‘Risk perceptionsof COVID-19 around the world’ (2020) 23(7- 8) Journal of Risk Research 994; Wändi Bruine de\nBruin, and Daniel Bennett, ‘Relationships between initial COVID -19 risk |
3,159 | ‘Relationships between initial COVID -19 risk perceptions and\nprotective health behaviors: A national survey’ (2020) 59(2) American Journal of Prev entive\nMedicine 157; and Simon Deakin and Gaofeng Meng, ‘The Governance of Covid- 19:\nAnthropogenic Risk, Evolutionary Learning, and the Future of the Social State’ (2020)49(4) Industrial Law Journal 539.\n', 'stderr': '', 'fileLinks': [], 'exitCode': 0}The text on page 3 of the PDF is: "1 COVID-19 at Work: Exposing how risk is assessed and its consequences in England and Sweden Peter Andersson and Tonia Novitz* 1.Introduction The crisis which arose suddenly at the beginning of 2020 relating to coronavirus was immediately centred on risk. Predictions had to be made swiftly regarding how it would spread, who it might affect and what measures could be taken to prevent exposure in everyday social interaction, including in the workplace. This was in no way a straightforward assessment, because initially so much was unknown. Those gaps in our knowledge have since, partially, been ameliorated. It is evident that not all those exposed to COVID-19 become ill, and many who contract the virus remain asymptomatic, so that the odds on becoming seriously ill may seem small. But those odds are also stacked against certain segments of the population. The likelihood of mortality and morbidity are associated with age and ethnicity as well as pre-existing medical conditions (such as diabetes), but also with poverty which correlates to the extent of exposure in certain occupations. 1 Some risks arise which remain less predictable, as previously healthy people with no signs of particular vulnerability can experience serious long term illness as well and in rare cases will even die.2 Perceptions of risk in different countries have led to particular measures taken, ranging from handwashing to social distancing, use of personal protective equipment (PPE) such as | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: ‘Relationships between initial COVID -19 risk perceptions and\nprotective health behaviors: A national survey’ (2020) 59(2) American Journal of Prev entive\nMedicine 157; and Simon Deakin and Gaofeng Meng, ‘The Governance of Covid- 19:\nAnthropogenic Risk, Evolutionary Learning, and the Future of the Social State’ (2020)49(4) Industrial Law Journal 539.\n', 'stderr': '', 'fileLinks': [], 'exitCode': 0}The text on page 3 of the PDF is: "1 COVID-19 at Work: Exposing how risk is assessed and its consequences in England and Sweden Peter Andersson and Tonia Novitz* 1.Introduction The crisis which arose suddenly at the beginning of 2020 relating to coronavirus was immediately centred on risk. Predictions had to be made swiftly regarding how it would spread, who it might affect and what measures could be taken to prevent exposure in everyday social interaction, including in the workplace. This was in no way a straightforward assessment, because initially so much was unknown. Those gaps in our knowledge have since, partially, been ameliorated. It is evident that not all those exposed to COVID-19 become ill, and many who contract the virus remain asymptomatic, so that the odds on becoming seriously ill may seem small. 
But those odds are also stacked against certain segments of the population. The likelihood of mortality and morbidity are associated with age and ethnicity as well as pre-existing medical conditions (such as diabetes), but also with poverty which correlates to the extent of exposure in certain occupations. 1 Some risks arise which remain less predictable, as previously healthy people with no signs of particular vulnerability can experience serious long term illness as well and in rare cases will even die.2 Perceptions of risk in different countries have led to particular measures taken, ranging from handwashing to social distancing, use of personal protective equipment (PPE) such as |
3,160 | of personal protective equipment (PPE) such as face coverings, and even ‘lockdowns’ which have taken various forms. 3 Use of testing and vaccines also became part of the remedial landscape, with their availability and administration being *This paper is part of the project An inclusive and sustainable Swedish labour law – the way ahead, dnr. 2017-03134 financed by the Swedish research council led by Petra Herzfeld Olssonat Stockholm University. The authors would like to thank her and other participants, Niklas Bruun and Erik Sjödin for their helpful comments on earlier drafts. A much shorter article titled ‘Risk Assessment and COVID -19: Systems at work (or not) in England and Sweden’ is published in the (2021) Comparative Labour and Social Security Review / Revue de droit comparé du travail et de la sécurité sociale. 1 Public Health England, Disparities in the risk and outcomes of COVID-19 (2 June 2020 - https://assets.publishing.service.gov.uk/government/uploads/ system /uploads/attachment_data/file /890258/disparities_review.pdf. 2 Nisreen A. Alwan, ‘Track COVID- 19 sickness, not just positive tests and deaths’ ( 2020) 584.7820 Nature 170- 171; Elisabeth Mahase, ‘Covid-19: What do we know about “long covid”?’ (2020) BMJ 370. 3 Sarah Dryhurst, Claudia R. Schneider, John Kerr, Alexandra LJ Freeman, Gabriel Recchia, Anne Marthe Van Der Bles, David Spiegelhalter, and Sander van der Linden, ‘Risk perceptionsof COVID-19 around the world’ (2020) 23(7- 8) Journal of Risk Research 994; Wändi Bruine de Bruin, and Daniel Bennett, ‘Relationships between initial COVID -19 risk perceptions and protective health behaviors: A national survey’ (2020) 59(2) American Journal of Preventive Medicine 157; and Simon Deakin and Gaofeng Meng, ‘The Governance of Covid- 19: Anthropogenic Risk, Evolutionary Learning, and the Future of the Social State’ (2020)49(4) Industrial Law | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: of personal protective equipment (PPE) such as face coverings, and even ‘lockdowns’ which have taken various forms. 3 Use of testing and vaccines also became part of the remedial landscape, with their availability and administration being *This paper is part of the project An inclusive and sustainable Swedish labour law – the way ahead, dnr. 2017-03134 financed by the Swedish research council led by Petra Herzfeld Olssonat Stockholm University. The authors would like to thank her and other participants, Niklas Bruun and Erik Sjödin for their helpful comments on earlier drafts. A much shorter article titled ‘Risk Assessment and COVID -19: Systems at work (or not) in England and Sweden’ is published in the (2021) Comparative Labour and Social Security Review / Revue de droit comparé du travail et de la sécurité sociale. 1 Public Health England, Disparities in the risk and outcomes of COVID-19 (2 June 2020 - https://assets.publishing.service.gov.uk/government/uploads/ system /uploads/attachment_data/file /890258/disparities_review.pdf. 2 Nisreen A. Alwan, ‘Track COVID- 19 sickness, not just positive tests and deaths’ ( 2020) 584.7820 Nature 170- 171; Elisabeth Mahase, ‘Covid-19: What do we know about “long covid”?’ (2020) BMJ 370. 3 Sarah Dryhurst, Claudia R. 
Schneider, John Kerr, Alexandra LJ Freeman, Gabriel Recchia, Anne Marthe Van Der Bles, David Spiegelhalter, and Sander van der Linden, ‘Risk perceptionsof COVID-19 around the world’ (2020) 23(7- 8) Journal of Risk Research 994; Wändi Bruine de Bruin, and Daniel Bennett, ‘Relationships between initial COVID -19 risk perceptions and protective health behaviors: A national survey’ (2020) 59(2) American Journal of Preventive Medicine 157; and Simon Deakin and Gaofeng Meng, ‘The Governance of Covid- 19: Anthropogenic Risk, Evolutionary Learning, and the Future of the Social State’ (2020)49(4) Industrial Law |
3,161 | of the Social State’ (2020)49(4) Industrial Law Journal 539." > Finished chain. 'The text on page 3 of the PDF is:\n\n"1 COVID-19 at Work: \nExposing how risk is assessed and its consequences in England and Sweden \nPeter Andersson and Tonia Novitz* \n1.Introduction\nThe crisis which arose suddenly at the beginning of 2020 relating to coronavirus was immediately \ncentred on risk. Predictions had to be made swiftly regarding how it would spread, who it might \naffect and what measures could be taken to prevent exposure in everyday social interaction, \nincluding in the workplace. This was in no way a straightforward assessment, because initially so \nmuch was unknown. Those gaps in our knowledge have since, partially, been ameliorated. It is \nevident that not all those exposed to COVID-19 become ill, and many who contract the virus remain \nasymptomatic, so that the odds on becoming seriously ill may seem small. But those odds are also stacked against certain segments of the population. The likelihood of mortality and morbidity are associated with age and ethnicity as well as pre-existing medical conditions (such as diabetes), but \nalso with poverty which correlates to the extent of exposure in certain occupations.\n1 Some risks \narise which remain less predictable, as previously healthy people with no signs of particular \nvulnerability can experience serious long term illness as well and in rare cases will even die.2 \nPerceptions of risk in different countries have led to particular measures taken, ranging from handwashing to social distancing, use of personal protective equipment (PPE) such as face coverings, and even ‘lockdowns’ which have taken various forms.\n3 Use of testing and vaccines \nalso became part of the remedial landscape, with their availability and administration being \n*This paper is part of the project An inclusive and sustainable Swedish labour law – the way\nahead, dnr. 2017-03134 financed by the Swedish research | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: of the Social State’ (2020)49(4) Industrial Law Journal 539." > Finished chain. 'The text on page 3 of the PDF is:\n\n"1 COVID-19 at Work: \nExposing how risk is assessed and its consequences in England and Sweden \nPeter Andersson and Tonia Novitz* \n1.Introduction\nThe crisis which arose suddenly at the beginning of 2020 relating to coronavirus was immediately \ncentred on risk. Predictions had to be made swiftly regarding how it would spread, who it might \naffect and what measures could be taken to prevent exposure in everyday social interaction, \nincluding in the workplace. This was in no way a straightforward assessment, because initially so \nmuch was unknown. Those gaps in our knowledge have since, partially, been ameliorated. It is \nevident that not all those exposed to COVID-19 become ill, and many who contract the virus remain \nasymptomatic, so that the odds on becoming seriously ill may seem small. But those odds are also stacked against certain segments of the population. 
The likelihood of mortality and morbidity are associated with age and ethnicity as well as pre-existing medical conditions (such as diabetes), but \nalso with poverty which correlates to the extent of exposure in certain occupations.\n1 Some risks \narise which remain less predictable, as previously healthy people with no signs of particular \nvulnerability can experience serious long term illness as well and in rare cases will even die.2 \nPerceptions of risk in different countries have led to particular measures taken, ranging from handwashing to social distancing, use of personal protective equipment (PPE) such as face coverings, and even ‘lockdowns’ which have taken various forms.\n3 Use of testing and vaccines \nalso became part of the remedial landscape, with their availability and administration being \n*This paper is part of the project An inclusive and sustainable Swedish labour law – the way\nahead, dnr. 2017-03134 financed by the Swedish research |
3,162 | dnr. 2017-03134 financed by the Swedish research council led by Petra Herzfeld Olssonat Stockholm University. The authors would like to thank her and other participants, Niklas\nBruun and Erik Sjödin for their helpful comments on earlier drafts. A much shorter article titled\n‘Risk Assessment and COVID -19: Systems at work (or not) in England and Sweden’ is published\nin the (2021) Comparative Labour and Social Security Review /\n Revue de droit comparé du\ntravail et de la sécurité sociale.\n1 Public Health England, Disparities in the risk and outcomes of COVID-19 (2 June 2020 -\nhttps://assets.publishing.service.gov.uk/government/uploads/ system /uploads/attachment_data/file\n/890258/disparities_review.pdf.\n2 Nisreen A. Alwan, ‘Track COVID- 19 sickness, not just positive tests and deaths’ ( 2020)\n584.7820 Nature 170- 171; Elisabeth Mahase, ‘Covid-19: What do we know about “long covid”?’\n(2020) BMJ 370.\n3 Sarah Dryhurst, Claudia R. Schneider, John Kerr, Alexandra LJ Freeman, Gabriel Recchia,\nAnne Marthe Van Der Bles, David Spiegelhalter, and Sander van der Linden, ‘Risk perceptionsof COVID-19 around the world’ (2020) 23(7- 8) Journal of Risk Research 994; Wändi Bruine de\nBruin, and Daniel Bennett, ‘Relationships between initial COVID -19 risk perceptions and\nprotective health behaviors: A national survey’ (2020) 59(2) American Journal of Preventive\nMedicine 157; and Simon Deakin and Gaofeng Meng, ‘The Governance of Covid- 19:\nAnthropogenic Risk, Evolutionary Learning, and the Future of the Social State’ (2020)49(4) Industrial Law Journal 539."'# Simple Queriesagent.run("What was the US GDP in 2019?") > Entering new AgentExecutor chain... Invoking: `bearly_interpreter` with `{'python_code': "import pandas as pd\n\n# Load the data\nus_gdp = pd.read_csv('US_GDP.csv')\n\n# Convert the 'DATE' column to datetime\nus_gdp['DATE'] = pd.to_datetime(us_gdp['DATE'])\n\n# Filter the data for the year | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: dnr. 2017-03134 financed by the Swedish research council led by Petra Herzfeld Olssonat Stockholm University. The authors would like to thank her and other participants, Niklas\nBruun and Erik Sjödin for their helpful comments on earlier drafts. A much shorter article titled\n‘Risk Assessment and COVID -19: Systems at work (or not) in England and Sweden’ is published\nin the (2021) Comparative Labour and Social Security Review /\n Revue de droit comparé du\ntravail et de la sécurité sociale.\n1 Public Health England, Disparities in the risk and outcomes of COVID-19 (2 June 2020 -\nhttps://assets.publishing.service.gov.uk/government/uploads/ system /uploads/attachment_data/file\n/890258/disparities_review.pdf.\n2 Nisreen A. Alwan, ‘Track COVID- 19 sickness, not just positive tests and deaths’ ( 2020)\n584.7820 Nature 170- 171; Elisabeth Mahase, ‘Covid-19: What do we know about “long covid”?’\n(2020) BMJ 370.\n3 Sarah Dryhurst, Claudia R. 
Schneider, John Kerr, Alexandra LJ Freeman, Gabriel Recchia,\nAnne Marthe Van Der Bles, David Spiegelhalter, and Sander van der Linden, ‘Risk perceptionsof COVID-19 around the world’ (2020) 23(7- 8) Journal of Risk Research 994; Wändi Bruine de\nBruin, and Daniel Bennett, ‘Relationships between initial COVID -19 risk perceptions and\nprotective health behaviors: A national survey’ (2020) 59(2) American Journal of Preventive\nMedicine 157; and Simon Deakin and Gaofeng Meng, ‘The Governance of Covid- 19:\nAnthropogenic Risk, Evolutionary Learning, and the Future of the Social State’ (2020)49(4) Industrial Law Journal 539."'# Simple Queriesagent.run("What was the US GDP in 2019?") > Entering new AgentExecutor chain... Invoking: `bearly_interpreter` with `{'python_code': "import pandas as pd\n\n# Load the data\nus_gdp = pd.read_csv('US_GDP.csv')\n\n# Convert the 'DATE' column to datetime\nus_gdp['DATE'] = pd.to_datetime(us_gdp['DATE'])\n\n# Filter the data for the year |
3,163 | Filter the data for the year 2019\nus_gdp_2019 = us_gdp[us_gdp['DATE'].dt.year == 2019]\n\n# Print the GDP for 2019\nprint(us_gdp_2019['GDP'].values)"}` {'stdout': '[21104.133 21384.775 21694.282 21902.39 ]\n', 'stderr': '', 'fileLinks': [], 'exitCode': 0}The US GDP for each quarter in 2019 was as follows: - Q1: 21104.133 billion dollars - Q2: 21384.775 billion dollars - Q3: 21694.282 billion dollars - Q4: 21902.39 billion dollars > Finished chain. 'The US GDP for each quarter in 2019 was as follows:\n\n- Q1: 21104.133 billion dollars\n- Q2: 21384.775 billion dollars\n- Q3: 21694.282 billion dollars\n- Q4: 21902.39 billion dollars'# Calculationsagent.run("What would the GDP be in 2030 if the latest GDP number grew by 50%?") > Entering new AgentExecutor chain... Invoking: `bearly_interpreter` with `{'python_code': "import pandas as pd\n\n# Load the data\nus_gdp = pd.read_csv('US_GDP.csv')\n\n# Get the latest GDP\nlatest_gdp = us_gdp['GDP'].iloc[-1]\n\n# Calculate the GDP in 2030 if the latest GDP number grew by 50%\ngdp_2030 = latest_gdp * 1.5\nprint(gdp_2030)"}` {'stdout': '40594.518\n', 'stderr': '', 'fileLinks': [], 'exitCode': 0}If the latest GDP number grew by 50%, the GDP in 2030 would be approximately 40,594.518 billion dollars. > Finished chain. 'If the latest GDP number grew by 50%, the GDP in 2030 would be approximately 40,594.518 billion dollars.'# Chart outputagent.run("Create a nice and labeled chart of the GDP growth over time") > Entering new AgentExecutor chain... Could not parse tool input: {'name': 'bearly_interpreter', 'arguments': '{\n "python_code": "\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Load the data\ndf = pd.read_csv(\'US_GDP.csv\')\n\n# Convert the \'DATE\' column to datetime format\ndf[\'DATE\'] = pd.to_datetime(df[\'DATE\'])\n\n# Plot the data\nplt.figure(figsize=(10,6))\nplt.plot(df[\'DATE\'], df[\'GDP\'], label=\'US | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: Filter the data for the year 2019\nus_gdp_2019 = us_gdp[us_gdp['DATE'].dt.year == 2019]\n\n# Print the GDP for 2019\nprint(us_gdp_2019['GDP'].values)"}` {'stdout': '[21104.133 21384.775 21694.282 21902.39 ]\n', 'stderr': '', 'fileLinks': [], 'exitCode': 0}The US GDP for each quarter in 2019 was as follows: - Q1: 21104.133 billion dollars - Q2: 21384.775 billion dollars - Q3: 21694.282 billion dollars - Q4: 21902.39 billion dollars > Finished chain. 'The US GDP for each quarter in 2019 was as follows:\n\n- Q1: 21104.133 billion dollars\n- Q2: 21384.775 billion dollars\n- Q3: 21694.282 billion dollars\n- Q4: 21902.39 billion dollars'# Calculationsagent.run("What would the GDP be in 2030 if the latest GDP number grew by 50%?") > Entering new AgentExecutor chain... Invoking: `bearly_interpreter` with `{'python_code': "import pandas as pd\n\n# Load the data\nus_gdp = pd.read_csv('US_GDP.csv')\n\n# Get the latest GDP\nlatest_gdp = us_gdp['GDP'].iloc[-1]\n\n# Calculate the GDP in 2030 if the latest GDP number grew by 50%\ngdp_2030 = latest_gdp * 1.5\nprint(gdp_2030)"}` {'stdout': '40594.518\n', 'stderr': '', 'fileLinks': [], 'exitCode': 0}If the latest GDP number grew by 50%, the GDP in 2030 would be approximately 40,594.518 billion dollars. > Finished chain. 
'If the latest GDP number grew by 50%, the GDP in 2030 would be approximately 40,594.518 billion dollars.'# Chart outputagent.run("Create a nice and labeled chart of the GDP growth over time") > Entering new AgentExecutor chain... Could not parse tool input: {'name': 'bearly_interpreter', 'arguments': '{\n "python_code": "\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Load the data\ndf = pd.read_csv(\'US_GDP.csv\')\n\n# Convert the \'DATE\' column to datetime format\ndf[\'DATE\'] = pd.to_datetime(df[\'DATE\'])\n\n# Plot the data\nplt.figure(figsize=(10,6))\nplt.plot(df[\'DATE\'], df[\'GDP\'], label=\'US |
3,164 | df[\'GDP\'], label=\'US GDP\')\nplt.xlabel(\'Year\')\nplt.ylabel(\'GDP (in billions)\')\nplt.title(\'US GDP Over Time\')\nplt.legend()\nplt.grid(True)\nplt.savefig(\'output/US_GDP.png\')\n"\n}'} because the `arguments` is not valid JSON.Invalid or incomplete response Invoking: `bearly_interpreter` with `{'python_code': "\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Load the data\ndf = pd.read_csv('US_GDP.csv')\n\n# Convert the 'DATE' column to datetime format\ndf['DATE'] = pd.to_datetime(df['DATE'])\n\n# Plot the data\nplt.figure(figsize=(10,6))\nplt.plot(df['DATE'], df['GDP'], label='US GDP')\nplt.xlabel('Year')\nplt.ylabel('GDP (in billions)')\nplt.title('US GDP Over Time')\nplt.legend()\nplt.grid(True)\nplt.savefig('output/US_GDP.png')\n"}` {'stdout': '', 'stderr': '', 'fileLinks': [{'pathname': 'US_GDP.png', 'tempLink': 'https://bearly-cubby.c559ae877a0a39985f534614a037d899.r2.cloudflarestorage.com/prod/bearly-cubby/temp/interpreter/2023_10/089daf37e9e343ba5ff21afaaa78b967c3466a550b3b11bd5c710c052b559e97/sxhM8gop2AYP88n5uHCsOJ6yTYNQm-HimZ70DcwQ4VI.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=c058d02de50a3cf0bb7e21c8e2d062c5%2F20231010%2F%2Fs3%2Faws4_request&X-Amz-Date=20231010T000000Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=104dc0d4a4b71eeea1030dda1830059920cb0f354fa00197b439eb8565bf141a', 'size': 34275}], 'exitCode': 0}Here is the chart of the US GDP growth over time:  | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: df[\'GDP\'], label=\'US GDP\')\nplt.xlabel(\'Year\')\nplt.ylabel(\'GDP (in billions)\')\nplt.title(\'US GDP Over Time\')\nplt.legend()\nplt.grid(True)\nplt.savefig(\'output/US_GDP.png\')\n"\n}'} because the `arguments` is not valid JSON.Invalid or incomplete response Invoking: `bearly_interpreter` with `{'python_code': "\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Load the data\ndf = pd.read_csv('US_GDP.csv')\n\n# Convert the 'DATE' column to datetime format\ndf['DATE'] = pd.to_datetime(df['DATE'])\n\n# Plot the data\nplt.figure(figsize=(10,6))\nplt.plot(df['DATE'], df['GDP'], label='US GDP')\nplt.xlabel('Year')\nplt.ylabel('GDP (in billions)')\nplt.title('US GDP Over Time')\nplt.legend()\nplt.grid(True)\nplt.savefig('output/US_GDP.png')\n"}` {'stdout': '', 'stderr': '', 'fileLinks': [{'pathname': 'US_GDP.png', 'tempLink': 'https://bearly-cubby.c559ae877a0a39985f534614a037d899.r2.cloudflarestorage.com/prod/bearly-cubby/temp/interpreter/2023_10/089daf37e9e343ba5ff21afaaa78b967c3466a550b3b11bd5c710c052b559e97/sxhM8gop2AYP88n5uHCsOJ6yTYNQm-HimZ70DcwQ4VI.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=c058d02de50a3cf0bb7e21c8e2d062c5%2F20231010%2F%2Fs3%2Faws4_request&X-Amz-Date=20231010T000000Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=104dc0d4a4b71eeea1030dda1830059920cb0f354fa00197b439eb8565bf141a', 'size': 34275}], 'exitCode': 0}Here is the chart of the US GDP growth over time:  |
3,165 | The x-axis represents the year and the y-axis represents the GDP in billions. The line plot shows the growth of the US GDP over time. > Finished chain. 'Here is the chart of the US GDP growth over time:\n\n\n\nThe x-axis represents the year and the y-axis represents the GDP in billions. The line plot shows the growth of the US GDP over time.'PreviousShell (bash)NextBing SearchCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter | Bearly Code Interpreter allows for remote execution of code. This makes it perfect for a code sandbox for agents, to allow for safe implementation of things like Code Interpreter ->: The x-axis represents the year and the y-axis represents the GDP in billions. The line plot shows the growth of the US GDP over time. > Finished chain. 'Here is the chart of the US GDP growth over time:\n\n\n\nThe x-axis represents the year and the y-axis represents the GDP in billions. The line plot shows the growth of the US GDP over time.'PreviousShell (bash)NextBing SearchCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
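For reference, the chain runs in the transcript above presuppose an agent that already has the Bearly sandbox tool plus the Bristol.pdf and US_GDP.csv files attached. A minimal sketch of that setup might look like the following; the API key placeholder, local file paths, and gpt-4 model choice are illustrative, and the BearlyInterpreterTool helper with its add_file/as_tool methods is assumed from langchain.tools rather than shown in the transcript itself.

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import BearlyInterpreterTool  # assumed helper for the Bearly sandbox

# Placeholder API key from bearly.ai; the two files are local samples uploaded to the sandbox.
bearly_tool = BearlyInterpreterTool(api_key="...")
bearly_tool.add_file(source_path="Bristol.pdf", target_path="Bristol.pdf", description="")
bearly_tool.add_file(source_path="US_GDP.csv", target_path="US_GDP.csv", description="")

llm = ChatOpenAI(model="gpt-4", temperature=0)
agent = initialize_agent(
    [bearly_tool.as_tool()],  # expose the sandbox as a single callable tool
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)
agent.run("What is the text on page 3 of Bristol.pdf?")
```

With this wiring, the queries shown above ("What was the US GDP in 2019?", the 2030 projection, and the chart request) run unchanged, with all Python executed remotely in the Bearly sandbox rather than on the host machine.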
3,166 | File System | 🦜️🔗 Langchain | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them. | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them. ->: File System | 🦜️🔗 Langchain
3,167 | File SystemLangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them.Note: these tools are not recommended for use outside a sandboxed environment! First, we'll import the tools.from langchain.tools.file_management import ( ReadFileTool, CopyFileTool, DeleteFileTool, MoveFileTool, WriteFileTool, ListDirectoryTool,)from langchain.agents.agent_toolkits import FileManagementToolkitfrom tempfile import TemporaryDirectory# We'll make a temporary directory to avoid clutterworking_directory = TemporaryDirectory()The FileManagementToolkitIf you want to provide all the file tooling to your agent, it's easy to do so with the toolkit. We'll pass the temporary directory in as a root directory as a workspace for the LLM.It's recommended to always pass in a root directory, since without one, it's easy for the LLM to pollute the working directory, and without one, there isn't any validation against | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them. | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them.
->: File SystemLangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them.Note: these tools are not recommended for use outside a sandboxed environment! First, we'll import the tools.from langchain.tools.file_management import ( ReadFileTool, CopyFileTool, DeleteFileTool, MoveFileTool, WriteFileTool, ListDirectoryTool,)from langchain.agents.agent_toolkits import FileManagementToolkitfrom tempfile import TemporaryDirectory# We'll make a temporary directory to avoid clutterworking_directory = TemporaryDirectory()The FileManagementToolkitIf you want to provide all the file tooling to your agent, it's easy to do so with the toolkit. We'll pass the temporary directory in as a root directory as a workspace for the LLM.It's recommended to always pass in a root directory, since without one, it's easy for the LLM to pollute the working directory, and without one, there isn't any validation against
3,168 | straightforward prompt injection.toolkit = FileManagementToolkit( root_dir=str(working_directory.name)) # If you don't provide a root_dir, operations will default to the current working directorytoolkit.get_tools() [CopyFileTool(name='copy_file', description='Create a copy of a file in a specified location', args_schema=<class 'langchain.tools.file_management.copy.FileCopyInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), DeleteFileTool(name='file_delete', description='Delete a file', args_schema=<class 'langchain.tools.file_management.delete.FileDeleteInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), FileSearchTool(name='file_search', description='Recursively search for files in a subdirectory that match the regex pattern', args_schema=<class 'langchain.tools.file_management.file_search.FileSearchInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), MoveFileTool(name='move_file', description='Move or rename a file from one location to another', args_schema=<class 'langchain.tools.file_management.move.FileMoveInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), ReadFileTool(name='read_file', description='Read file from disk', args_schema=<class 'langchain.tools.file_management.read.ReadFileInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them. | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them. 
->: straightforward prompt injection.toolkit = FileManagementToolkit( root_dir=str(working_directory.name)) # If you don't provide a root_dir, operations will default to the current working directorytoolkit.get_tools() [CopyFileTool(name='copy_file', description='Create a copy of a file in a specified location', args_schema=<class 'langchain.tools.file_management.copy.FileCopyInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), DeleteFileTool(name='file_delete', description='Delete a file', args_schema=<class 'langchain.tools.file_management.delete.FileDeleteInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), FileSearchTool(name='file_search', description='Recursively search for files in a subdirectory that match the regex pattern', args_schema=<class 'langchain.tools.file_management.file_search.FileSearchInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), MoveFileTool(name='move_file', description='Move or rename a file from one location to another', args_schema=<class 'langchain.tools.file_management.move.FileMoveInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), ReadFileTool(name='read_file', description='Read file from disk', args_schema=<class 'langchain.tools.file_management.read.ReadFileInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, |
3,169 | object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), WriteFileTool(name='write_file', description='Write file to disk', args_schema=<class 'langchain.tools.file_management.write.WriteFileInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), ListDirectoryTool(name='list_directory', description='List files and directories in a specified folder', args_schema=<class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug')]Selecting File System ToolsIf you only want to select certain tools, you can pass them in as arguments when initializing the toolkit, or you can individually initialize the desired tools.tools = FileManagementToolkit( root_dir=str(working_directory.name), selected_tools=["read_file", "write_file", "list_directory"],).get_tools()tools [ReadFileTool(name='read_file', description='Read file from disk', args_schema=<class 'langchain.tools.file_management.read.ReadFileInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), WriteFileTool(name='write_file', description='Write file to disk', args_schema=<class 'langchain.tools.file_management.write.WriteFileInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), ListDirectoryTool(name='list_directory', description='List files and directories in a specified folder', | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them. | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them.
->: object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), WriteFileTool(name='write_file', description='Write file to disk', args_schema=<class 'langchain.tools.file_management.write.WriteFileInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), ListDirectoryTool(name='list_directory', description='List files and directories in a specified folder', args_schema=<class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug')]Selecting File System ToolsIf you only want to select certain tools, you can pass them in as arguments when initializing the toolkit, or you can individually initialize the desired tools.tools = FileManagementToolkit( root_dir=str(working_directory.name), selected_tools=["read_file", "write_file", "list_directory"],).get_tools()tools [ReadFileTool(name='read_file', description='Read file from disk', args_schema=<class 'langchain.tools.file_management.read.ReadFileInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), WriteFileTool(name='write_file', description='Write file to disk', args_schema=<class 'langchain.tools.file_management.write.WriteFileInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), ListDirectoryTool(name='list_directory', description='List files and directories in a specified folder',
3,170 | files and directories in a specified folder', args_schema=<class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug')]read_tool, write_tool, list_tool = toolswrite_tool.run({"file_path": "example.txt", "text": "Hello World!"}) 'File written successfully to example.txt.'# List files in the working directorylist_tool.run({}) 'example.txt' | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them. | LangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them. ->: files and directories in a specified folder', args_schema=<class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1156f4350>, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug')]read_tool, write_tool, list_tool = toolswrite_tool.run({"file_path": "example.txt", "text": "Hello World!"}) 'File written successfully to example.txt.'# List files in the working directorylist_tool.run({}) 'example.txt'
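The selected tools above can also be exercised a little further or handed to an agent. The short sketch below uses the same working directory and tool objects; the agent type and model choice are assumptions for illustration, not part of the original notebook.

```python
# Round-trip the file created above with the already-selected tools.
print(read_tool.run({"file_path": "example.txt"}))  # -> 'Hello World!'
print(list_tool.run({}))                            # -> 'example.txt'

# The same tools can back an agent; the structured-chat agent handles
# multi-argument tool inputs such as write_file's file_path + text.
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI

agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("Write 'hello' to notes.txt, then list the files in the working directory.")
```

Because the toolkit was built with a root_dir, every path the agent supplies is validated against that sandbox directory.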
3,171 | BiliBili | 🦜️🔗 Langchain | Bilibili is one of the most beloved long-form video sites in China. | Bilibili is one of the most beloved long-form video sites in China. ->: BiliBili | 🦜️🔗 Langchain
3,173 | BiliBiliBilibili is one of the most beloved long-form video sites in China.Installation and Setup pip install bilibili-api-pythonDocument Loader See a usage example.from langchain.document_loaders import BiliBiliLoader | Bilibili is one of the most beloved long-form video sites in China. | Bilibili is one of the most beloved long-form video sites in China. ->: BiliBiliBilibili is one of the most beloved long-form video sites in China.Installation and Setup pip install bilibili-api-pythonDocument Loader See a usage example.from langchain.document_loaders import BiliBiliLoader
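A minimal end-to-end sketch of the loader mentioned above: the video URL is a placeholder, and the video_urls keyword argument is assumed from the linked usage example.

```python
from langchain.document_loaders import BiliBiliLoader

# Placeholder Bilibili video URL; the loader returns LangChain Document objects
# containing the video's transcript and metadata.
loader = BiliBiliLoader(video_urls=["https://www.bilibili.com/video/BV1xt411o7Xu"])
docs = loader.load()
print(docs[0].metadata)
print(docs[0].page_content[:200])
```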
3,174 | Graphsignal | 🦜️🔗 Langchain | This page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more. | This page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more. ->: Graphsignal | 🦜️🔗 Langchain
3,176 | and toolkitsMemoryCallbacksChat loadersProvidersMoreGraphsignalOn this pageGraphsignalThis page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more.Installation and Setup​Install the Python library with pip install graphsignalCreate free Graphsignal account hereGet an API key and set it as an environment variable (GRAPHSIGNAL_API_KEY)Tracing and Monitoring​Graphsignal automatically instruments and starts tracing and monitoring chains. Traces and metrics are then available in your Graphsignal dashboards.Initialize the tracer by providing a deployment name:import graphsignalgraphsignal.configure(deployment='my-langchain-app-prod')To additionally trace any function or code, you can use a decorator or a context manager:@graphsignal.trace_functiondef handle_request(): chain.run("some initial text")with graphsignal.start_trace('my-chain'): chain.run("some initial text")Optionally, enable profiling to record function-level statistics for each trace.with graphsignal.start_trace( 'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)): chain.run("some initial text")See the Quick Start guide for complete setup instructions.PreviousGradientNextGrobidInstallation and SetupTracing and MonitoringCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more. | This page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more. ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreGraphsignalOn this pageGraphsignalThis page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more.Installation and Setup​Install the Python library with pip install graphsignalCreate free Graphsignal account hereGet an API key and set it as an environment variable (GRAPHSIGNAL_API_KEY)Tracing and Monitoring​Graphsignal automatically instruments and starts tracing and monitoring chains. 
Traces and metrics are then available in your Graphsignal dashboards.Initialize the tracer by providing a deployment name:import graphsignalgraphsignal.configure(deployment='my-langchain-app-prod')To additionally trace any function or code, you can use a decorator or a context manager:@graphsignal.trace_functiondef handle_request(): chain.run("some initial text")with graphsignal.start_trace('my-chain'): chain.run("some initial text")Optionally, enable profiling to record function-level statistics for each trace.with graphsignal.start_trace( 'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)): chain.run("some initial text")See the Quick Start guide for complete setup instructions.PreviousGradientNextGrobidInstallation and SetupTracing and MonitoringCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
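A minimal end-to-end sketch of the Graphsignal setup described above, assuming an OpenAI API key is available; the deployment name, span name, and prompt are illustrative placeholders rather than part of the original page:

import graphsignal
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Configure the tracer once at startup with a deployment name.
graphsignal.configure(deployment='my-langchain-app-prod')

# An ordinary LangChain chain; the prompt is illustrative.
llm = OpenAI(temperature=0)
prompt = PromptTemplate(input_variables=["topic"], template="Write one sentence about {topic}.")
chain = LLMChain(llm=llm, prompt=prompt)

# Wrap the chain invocation in an explicit trace span; chains are also auto-instrumented,
# so this mainly groups the run under one parent span (profiling is optional).
with graphsignal.start_trace('my-chain', options=graphsignal.TraceOptions(enable_profiling=True)):
    chain.run("observability")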
3,177 | Flyte | 🦜️🔗 Langchain | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. ->: Flyte | 🦜️🔗 Langchain
3,179 | and toolkitsMemoryCallbacksChat loadersProvidersMoreFlyteOn this pageFlyteFlyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreFlyteOn this pageFlyteFlyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. |
3,180 | It is built for scalability and reproducibility, leveraging Kubernetes as its underlying platform.The purpose of this notebook is to demonstrate the integration of a FlyteCallback into your Flyte task, enabling you to effectively monitor and track your LangChain experiments.Installation & Setup Install the Flytekit library by running the command pip install flytekit.Install the Flytekit-Envd plugin by running the command pip install flytekitplugins-envd.Install LangChain by running the command pip install langchain.Install Docker on your system.Flyte Tasks A Flyte task serves as the foundational building block of Flyte.
To execute LangChain experiments, you need to write Flyte tasks that define the specific steps and operations involved.NOTE: The getting started guide offers detailed, step-by-step instructions on installing Flyte locally and running your initial Flyte pipeline.First, import the necessary dependencies to support your LangChain experiments.import osfrom flytekit import ImageSpec, taskfrom langchain.agents import AgentType, initialize_agent, load_toolsfrom langchain.callbacks import FlyteCallbackHandlerfrom langchain.chains import LLMChainfrom langchain.chat_models import ChatOpenAIfrom langchain.prompts import PromptTemplatefrom langchain.schema import HumanMessageSet up the necessary environment variables to utilize the OpenAI API and Serp API:# Set OpenAI API keyos.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"# Set Serp API keyos.environ["SERPAPI_API_KEY"] = "<your_serp_api_key>"Replace <your_openai_api_key> and <your_serp_api_key> with your respective API keys obtained from OpenAI and Serp API.To guarantee reproducibility of your pipelines, Flyte tasks are containerized. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. ->: It is built for scalability and reproducibility, leveraging Kubernetes as its underlying platform.The purpose of this notebook is to demonstrate the integration of a FlyteCallback into your Flyte task, enabling you to effectively monitor and track your LangChain experiments.Installation & Setup Install the Flytekit library by running the command pip install flytekit.Install the Flytekit-Envd plugin by running the command pip install flytekitplugins-envd.Install LangChain by running the command pip install langchain.Install Docker on your system.Flyte Tasks A Flyte task serves as the foundational building block of Flyte.
To execute LangChain experiments, you need to write Flyte tasks that define the specific steps and operations involved.NOTE: The getting started guide offers detailed, step-by-step instructions on installing Flyte locally and running your initial Flyte pipeline.First, import the necessary dependencies to support your LangChain experiments.import osfrom flytekit import ImageSpec, taskfrom langchain.agents import AgentType, initialize_agent, load_toolsfrom langchain.callbacks import FlyteCallbackHandlerfrom langchain.chains import LLMChainfrom langchain.chat_models import ChatOpenAIfrom langchain.prompts import PromptTemplatefrom langchain.schema import HumanMessageSet up the necessary environment variables to utilize the OpenAI API and Serp API:# Set OpenAI API keyos.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"# Set Serp API keyos.environ["SERPAPI_API_KEY"] = "<your_serp_api_key>"Replace <your_openai_api_key> and <your_serp_api_key> with your respective API keys obtained from OpenAI and Serp API.To guarantee reproducibility of your pipelines, Flyte tasks are containerized. |
3,181 | Each Flyte task must be associated with an image, which can either be shared across the entire Flyte workflow or provided separately for each task.To streamline the process of supplying the required dependencies for each Flyte task, you can initialize an ImageSpec object.
This approach automatically triggers a Docker build, alleviating the need for users to manually create a Docker image.custom_image = ImageSpec( name="langchain-flyte", packages=[ "langchain", "openai", "spacy", "https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.5.0/en_core_web_sm-3.5.0.tar.gz", "textstat", "google-search-results", ], registry="<your-registry>",)You have the flexibility to push the Docker image to a registry of your preference. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. ->: Each Flyte task must be associated with an image, which can either be shared across the entire Flyte workflow or provided separately for each task.To streamline the process of supplying the required dependencies for each Flyte task, you can initialize an ImageSpec object.
This approach automatically triggers a Docker build, alleviating the need for users to manually create a Docker image.custom_image = ImageSpec( name="langchain-flyte", packages=[ "langchain", "openai", "spacy", "https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.5.0/en_core_web_sm-3.5.0.tar.gz", "textstat", "google-search-results", ], registry="<your-registry>",)You have the flexibility to push the Docker image to a registry of your preference. |
3,182 | Docker Hub or GitHub Container Registry (GHCR) is a convenient option to begin with.Once you have selected a registry, you can proceed to create Flyte tasks that log the LangChain metrics to Flyte Deck.The following examples demonstrate tasks related to OpenAI LLM, chains and agent with tools:LLM‚Äã@task(disable_deck=False, container_image=custom_image)def langchain_llm() -> str: llm = ChatOpenAI( model_name="gpt-3.5-turbo", temperature=0.2, callbacks=[FlyteCallbackHandler()], ) return llm([HumanMessage(content="Tell me a joke")]).contentChain‚Äã@task(disable_deck=False, container_image=custom_image)def langchain_chain() -> list[dict[str, str]]: template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.Title: {title}Playwright: This is a synopsis for the above play:""" llm = ChatOpenAI( model_name="gpt-3.5-turbo", temperature=0, callbacks=[FlyteCallbackHandler()], ) prompt_template = PromptTemplate(input_variables=["title"], template=template) synopsis_chain = LLMChain( llm=llm, prompt=prompt_template, callbacks=[FlyteCallbackHandler()] ) test_prompts = [ { "title": "documentary about good video games that push the boundary of game design" }, ] return synopsis_chain.apply(test_prompts)Agent‚Äã@task(disable_deck=False, container_image=custom_image)def langchain_agent() -> str: llm = OpenAI( model_name="gpt-3.5-turbo", temperature=0, callbacks=[FlyteCallbackHandler()], ) tools = load_tools( ["serpapi", "llm-math"], llm=llm, callbacks=[FlyteCallbackHandler()] ) agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callbacks=[FlyteCallbackHandler()], verbose=True, ) return agent.run( "Who is Leonardo DiCaprio's girlfriend? Could you calculate her current age and raise it to the power of | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. ->: Docker Hub or GitHub Container Registry (GHCR) is a convenient option to begin with.Once you have selected a registry, you can proceed to create Flyte tasks that log the LangChain metrics to Flyte Deck.The following examples demonstrate tasks related to OpenAI LLM, chains and agent with tools:LLM‚Äã@task(disable_deck=False, container_image=custom_image)def langchain_llm() -> str: llm = ChatOpenAI( model_name="gpt-3.5-turbo", temperature=0.2, callbacks=[FlyteCallbackHandler()], ) return llm([HumanMessage(content="Tell me a joke")]).contentChain‚Äã@task(disable_deck=False, container_image=custom_image)def langchain_chain() -> list[dict[str, str]]: template = """You are a playwright. 
Given the title of play, it is your job to write a synopsis for that title.Title: {title}Playwright: This is a synopsis for the above play:""" llm = ChatOpenAI( model_name="gpt-3.5-turbo", temperature=0, callbacks=[FlyteCallbackHandler()], ) prompt_template = PromptTemplate(input_variables=["title"], template=template) synopsis_chain = LLMChain( llm=llm, prompt=prompt_template, callbacks=[FlyteCallbackHandler()] ) test_prompts = [ { "title": "documentary about good video games that push the boundary of game design" }, ] return synopsis_chain.apply(test_prompts)Agent‚Äã@task(disable_deck=False, container_image=custom_image)def langchain_agent() -> str: llm = OpenAI( model_name="gpt-3.5-turbo", temperature=0, callbacks=[FlyteCallbackHandler()], ) tools = load_tools( ["serpapi", "llm-math"], llm=llm, callbacks=[FlyteCallbackHandler()] ) agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callbacks=[FlyteCallbackHandler()], verbose=True, ) return agent.run( "Who is Leonardo DiCaprio's girlfriend? Could you calculate her current age and raise it to the power of |
3,183 | her current age and raise it to the power of 0.43?" )These tasks serve as a starting point for running your LangChain experiments within Flyte.Execute the Flyte Tasks on Kubernetes​To execute the Flyte tasks on the configured Flyte backend, use the following command:pyflyte run --image <your-image> langchain_flyte.py langchain_llmThis command will initiate the execution of the langchain_llm task on the Flyte backend. You can trigger the remaining two tasks in a similar manner.The metrics will be displayed on the Flyte UI as follows:PreviousFireworksNextForefrontAIInstallation & SetupFlyte TasksLLMChainAgentExecute the Flyte Tasks on KubernetesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. | Flyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines. ->: her current age and raise it to the power of 0.43?" )These tasks serve as a starting point for running your LangChain experiments within Flyte.Execute the Flyte Tasks on Kubernetes​To execute the Flyte tasks on the configured Flyte backend, use the following command:pyflyte run --image <your-image> langchain_flyte.py langchain_llmThis command will initiate the execution of the langchain_llm task on the Flyte backend. You can trigger the remaining two tasks in a similar manner.The metrics will be displayed on the Flyte UI as follows:PreviousFireworksNextForefrontAIInstallation & SetupFlyte TasksLLMChainAgentExecute the Flyte Tasks on KubernetesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
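If you want to launch the three tasks together rather than one at a time, they can be composed into a Flyte workflow. The sketch below is illustrative and not part of the original guide; note also that the agent task above constructs its LLM with OpenAI while the earlier imports only bring in ChatOpenAI, so depending on your setup you may additionally need from langchain.llms import OpenAI.

from flytekit import workflow

# Hypothetical workflow that runs the three tasks defined on this page in one execution;
# the workflow name and return value are illustrative.
@workflow
def langchain_experiments() -> str:
    joke = langchain_llm()
    synopses = langchain_chain()
    answer = langchain_agent()
    return answer

# Launched the same way as a single task, e.g.:
# pyflyte run --image <your-image> langchain_flyte.py langchain_experiments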
3,184 | OpenLLM | 🦜️🔗 Langchain | This page demonstrates how to use OpenLLM | This page demonstrates how to use OpenLLM ->: OpenLLM | 🦜️🔗 Langchain
3,186 | and toolkitsMemoryCallbacksChat loadersProvidersMoreOpenLLMOn this pageOpenLLMThis page demonstrates how to use OpenLLM | This page demonstrates how to use OpenLLM | This page demonstrates how to use OpenLLM ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreOpenLLMOn this pageOpenLLMThis page demonstrates how to use OpenLLM |
3,187 | with LangChain.OpenLLM is an open platform for operating large language models (LLMs) in
production. It enables developers to easily run inference with any open-source
LLMs, deploy to the cloud or on-premises, and build powerful AI apps.Installation and Setup Install the OpenLLM package via PyPI:pip install openllm LLM OpenLLM supports a wide range of open-source LLMs as well as serving users' own
fine-tuned LLMs. Use the openllm model command to see all available models that
are pre-optimized for OpenLLM.Wrappers There is an OpenLLM Wrapper which supports loading an LLM in-process or accessing a
remote OpenLLM server:from langchain.llms import OpenLLMWrapper for OpenLLM server This wrapper supports connecting to an OpenLLM server via HTTP or gRPC. The
OpenLLM server can run either locally or in the cloud.To try it out locally, start an OpenLLM server:openllm start flan-t5Wrapper usage:from langchain.llms import OpenLLMllm = OpenLLM(server_url='http://localhost:3000')llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")Wrapper for Local Inference You can also use the OpenLLM wrapper to load an LLM in the current Python process for
running inference.from langchain.llms import OpenLLMllm = OpenLLM(model_name="dolly-v2", model_id='databricks/dolly-v2-7b')llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")Usage For a more detailed walkthrough of the OpenLLM Wrapper, see the
example notebookPreviousObsidianNextOpenSearchInstallation and SetupLLMWrappersWrapper for OpenLLM serverWrapper for Local InferenceUsageCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This page demonstrates how to use OpenLLM | This page demonstrates how to use OpenLLM ->: with LangChain.OpenLLM is an open platform for operating large language models (LLMs) in
production. It enables developers to easily run inference with any open-source
LLMs, deploy to the cloud or on-premises, and build powerful AI apps.Installation and Setup Install the OpenLLM package via PyPI:pip install openllm LLM OpenLLM supports a wide range of open-source LLMs as well as serving users' own
fine-tuned LLMs. Use the openllm model command to see all available models that
are pre-optimized for OpenLLM.Wrappers There is an OpenLLM Wrapper which supports loading an LLM in-process or accessing a
remote OpenLLM server:from langchain.llms import OpenLLMWrapper for OpenLLM server This wrapper supports connecting to an OpenLLM server via HTTP or gRPC. The
OpenLLM server can run either locally or in the cloud.To try it out locally, start an OpenLLM server:openllm start flan-t5Wrapper usage:from langchain.llms import OpenLLMllm = OpenLLM(server_url='http://localhost:3000')llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")Wrapper for Local Inference You can also use the OpenLLM wrapper to load an LLM in the current Python process for
running inference.from langchain.llms import OpenLLMllm = OpenLLM(model_name="dolly-v2", model_id='databricks/dolly-v2-7b')llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")Usage For a more detailed walkthrough of the OpenLLM Wrapper, see the
example notebookPreviousObsidianNextOpenSearchInstallation and SetupLLMWrappersWrapper for OpenLLM serverWrapper for Local InferenceUsageCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
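A small usage sketch building on the wrappers above; it assumes the flan-t5 server from the quick-start command is already running locally, and the prompt and chain are illustrative:

from langchain.llms import OpenLLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Connect to the locally running OpenLLM server started with `openllm start flan-t5`.
llm = OpenLLM(server_url='http://localhost:3000')

# The wrapper behaves like any other LangChain LLM, e.g. inside an LLMChain.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))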
3,188 | Hugging Face | 🦜️🔗 Langchain | This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. | This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. ->: Hugging Face | 🦜️🔗 Langchain
3,190 | and toolkitsMemoryCallbacksChat loadersProvidersMoreHugging FaceOn this pageHugging FaceThis page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. | This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. | This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreHugging FaceOn this pageHugging FaceThis page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. |
3,191 | It is broken into two parts: installation and setup, and then references to specific Hugging Face wrappers.Installation and Setup If you want to work with the Hugging Face Hub:Install the Hub client library with pip install huggingface_hubCreate a Hugging Face account (it's free!)Create an access token and set it as an environment variable (HUGGINGFACEHUB_API_TOKEN)If you want to work with the Hugging Face Python libraries:Install pip install transformers for working with models and tokenizersInstall pip install datasets for working with datasetsWrappers LLM There are two Hugging Face LLM wrappers, one for a local pipeline and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for models that support the following tasks: text2text-generation, text-generationTo use the local pipeline wrapper:from langchain.llms import HuggingFacePipelineTo use the wrapper for a model hosted on Hugging Face Hub:from langchain.llms import HuggingFaceHubFor a more detailed walkthrough of the Hugging Face Hub wrapper, see this notebookEmbeddings There are two Hugging Face Embeddings wrappers, one for a local model and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for sentence-transformers models.To use the local pipeline wrapper:from langchain.embeddings import HuggingFaceEmbeddingsTo use the wrapper for a model hosted on Hugging Face Hub:from langchain.embeddings import HuggingFaceHubEmbeddingsFor a more detailed walkthrough of this, see this notebookTokenizer There are several places you can use tokenizers available through the transformers package. | This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. | This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. ->: It is broken into two parts: installation and setup, and then references to specific Hugging Face wrappers.Installation and Setup If you want to work with the Hugging Face Hub:Install the Hub client library with pip install huggingface_hubCreate a Hugging Face account (it's free!)Create an access token and set it as an environment variable (HUGGINGFACEHUB_API_TOKEN)If you want to work with the Hugging Face Python libraries:Install pip install transformers for working with models and tokenizersInstall pip install datasets for working with datasetsWrappers LLM There are two Hugging Face LLM wrappers, one for a local pipeline and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for models that support the following tasks: text2text-generation, text-generationTo use the local pipeline wrapper:from langchain.llms import HuggingFacePipelineTo use the wrapper for a model hosted on Hugging Face Hub:from langchain.llms import HuggingFaceHubFor a more detailed walkthrough of the Hugging Face Hub wrapper, see this notebookEmbeddings There are two Hugging Face Embeddings wrappers, one for a local model and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for sentence-transformers models.To use the local pipeline wrapper:from langchain.embeddings import HuggingFaceEmbeddingsTo use the wrapper for a model hosted on Hugging Face Hub:from langchain.embeddings import HuggingFaceHubEmbeddingsFor a more detailed walkthrough of this, see this notebookTokenizer There are several places you can use tokenizers available through the transformers package.
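A short illustrative sketch of the Hub LLM wrapper and the local embeddings wrapper; the repo id, model kwargs, and query text are example choices, and HUGGINGFACEHUB_API_TOKEN is assumed to be set as described above:

from langchain.llms import HuggingFaceHub
from langchain.embeddings import HuggingFaceEmbeddings

# Hosted text2text-generation model served through the Hugging Face Hub.
llm = HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature": 0.5, "max_length": 64})
print(llm("Translate English to German: How old are you?"))

# Local sentence-transformers model; downloads weights on first use.
embeddings = HuggingFaceEmbeddings()
vector = embeddings.embed_query("What was said about LangChain?")
print(len(vector))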
3,192 | By default, it is used to count tokens for all LLMs.You can also use it to count tokens when splitting documents with from langchain.text_splitter import CharacterTextSplitterCharacterTextSplitter.from_huggingface_tokenizer(...)For a more detailed walkthrough of this, see this notebookDatasets​The Hugging Face Hub has lots of great datasets that can be used to evaluate your LLM chains.For a detailed walkthrough of how to use them to do so, see this notebookPreviousHTML to textNextiFixitInstallation and SetupWrappersLLMEmbeddingsTokenizerDatasetsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. | This page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain. ->: By default, it is used to count tokens for all LLMs.You can also use it to count tokens when splitting documents with from langchain.text_splitter import CharacterTextSplitterCharacterTextSplitter.from_huggingface_tokenizer(...)For a more detailed walkthrough of this, see this notebookDatasets​The Hugging Face Hub has lots of great datasets that can be used to evaluate your LLM chains.For a detailed walkthrough of how to use them to do so, see this notebookPreviousHTML to textNextiFixitInstallation and SetupWrappersLLMEmbeddingsTokenizerDatasetsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
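For the tokenizer-based splitting mentioned above, a minimal sketch; the GPT-2 tokenizer, chunk sizes, and sample text are example choices:

from transformers import GPT2TokenizerFast
from langchain.text_splitter import CharacterTextSplitter

# Measure chunk sizes in Hugging Face tokens instead of raw characters.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)

long_text = "LangChain composes LLMs, prompts, and tools into applications. " * 50
chunks = splitter.split_text(long_text)
print(len(chunks))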
3,193 | Dingo | 🦜️🔗 Langchain | This page covers how to use the Dingo ecosystem within LangChain. | This page covers how to use the Dingo ecosystem within LangChain. ->: Dingo | 🦜️🔗 Langchain
3,195 | and toolkitsMemoryCallbacksChat loadersProvidersMoreDingoOn this pageDingoThis page covers how to use the Dingo ecosystem within LangChain. | This page covers how to use the Dingo ecosystem within LangChain. | This page covers how to use the Dingo ecosystem within LangChain. ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreDingoOn this pageDingoThis page covers how to use the Dingo ecosystem within LangChain. |
3,196 | It is broken into two parts: installation and setup, and then references to specific Dingo wrappers.Installation and Setup Install the Python SDK with pip install dingodbVectorStore There exists a wrapper around Dingo indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.To import this vectorstore:from langchain.vectorstores import DingoFor a more detailed walkthrough of the Dingo wrapper, see this notebookPreviousDiffbotNextDiscordInstallation and SetupVectorStoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This page covers how to use the Dingo ecosystem within LangChain. | This page covers how to use the Dingo ecosystem within LangChain. ->: It is broken into two parts: installation and setup, and then references to specific Dingo wrappers.Installation and Setup​Install the Python SDK with pip install dingodbVectorStore​There exists a wrapper around Dingo indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.To import this vectorstore:from langchain.vectorstores import DingoFor a more detailed walkthrough of the Dingo wrapper, see this notebookPreviousDiffbotNextDiscordInstallation and SetupVectorStoreCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
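A hedged sketch of the vector store wrapper; the connection and index arguments vary with your DingoDB deployment, so treat the keyword arguments below as placeholders and consult the notebook linked above for the exact constructor signature:

from langchain.vectorstores import Dingo
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()  # assumes OPENAI_API_KEY is set
texts = [
    "DingoDB is a distributed multi-modal vector database.",
    "LangChain can use it as a vector store for semantic search.",
]

# `index_name` (and any client/host settings) are illustrative placeholders.
vectorstore = Dingo.from_texts(texts, embeddings, index_name="langchain_demo")
docs = vectorstore.similarity_search("What is DingoDB?")
print(docs[0].page_content)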
3,197 | Helicone | 🦜️🔗 Langchain | This page covers how to use the Helicone ecosystem within LangChain. | This page covers how to use the Helicone ecosystem within LangChain. ->: Helicone | 🦜️🔗 Langchain
3,199 | and toolkitsMemoryCallbacksChat loadersProvidersMoreHeliconeOn this pageHeliconeThis page covers how to use the Helicone ecosystem within LangChain.What is Helicone?​Helicone is an open-source observability platform that proxies your OpenAI traffic and provides you key insights into your spend, latency and usage.Quick start​With your LangChain environment you can just add the following parameter.export OPENAI_API_BASE="https://oai.hconeai.com/v1"Now head over to helicone.ai to create your account, and add your OpenAI API key within our dashboard to view your logs.How to enable Helicone caching​from langchain.llms import OpenAIimport openaiopenai.api_base = "https://oai.hconeai.com/v1"llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})text = "What is a helicone?"print(llm(text))Helicone caching docsHow to use Helicone custom properties​from langchain.llms import OpenAIimport openaiopenai.api_base = "https://oai.hconeai.com/v1"llm = OpenAI(temperature=0.9, headers={ "Helicone-Property-Session": "24", "Helicone-Property-Conversation": "support_issue_2", "Helicone-Property-App": "mobile", })text = "What is a helicone?"print(llm(text))Helicone property docsPreviousHazy ResearchNextHologresWhat is Helicone?Quick startHow to enable Helicone cachingHow to use Helicone custom propertiesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | This page covers how to use the Helicone ecosystem within LangChain. | This page covers how to use the Helicone ecosystem within LangChain. ->: and toolkitsMemoryCallbacksChat loadersProvidersMoreHeliconeOn this pageHeliconeThis page covers how to use the Helicone ecosystem within LangChain.What is Helicone?​Helicone is an open-source observability platform that proxies your OpenAI traffic and provides you key insights into your spend, latency and usage.Quick start​With your LangChain environment you can just add the following parameter.export OPENAI_API_BASE="https://oai.hconeai.com/v1"Now head over to helicone.ai to create your account, and add your OpenAI API key within our dashboard to view your logs.How to enable Helicone caching​from langchain.llms import OpenAIimport openaiopenai.api_base = "https://oai.hconeai.com/v1"llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})text = "What is a helicone?"print(llm(text))Helicone caching docsHow to use Helicone custom properties​from langchain.llms import OpenAIimport openaiopenai.api_base = "https://oai.hconeai.com/v1"llm = OpenAI(temperature=0.9, headers={ "Helicone-Property-Session": "24", "Helicone-Property-Conversation": "support_issue_2", "Helicone-Property-App": "mobile", })text = "What is a helicone?"print(llm(text))Helicone property docsPreviousHazy ResearchNextHologresWhat is Helicone?Quick startHow to enable Helicone cachingHow to use Helicone custom propertiesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. |
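Since caching and custom properties are both plain request headers in the examples above, they can be combined on a single client; a small sketch, with header values that are illustrative:

from langchain.llms import OpenAI
import openai

# Route OpenAI traffic through the Helicone proxy.
openai.api_base = "https://oai.hconeai.com/v1"

llm = OpenAI(
    temperature=0.9,
    headers={
        "Helicone-Cache-Enabled": "true",   # from the caching example above
        "Helicone-Property-App": "mobile",  # from the custom-properties example above
        "Helicone-Property-Session": "24",
    },
)
print(llm("What is a helicone?"))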