# AWS

The `LangChain` integrations related to [Amazon AWS](https://aws.amazon.com/) platform.

First-party AWS integrations are available in the `langchain_aws` package.

```bash
pip install langchain-aws
```

Community-maintained integrations are also available in the `langchain_community` package, which requires the optional `boto3` dependency.

```bash
pip install langchain-community boto3
```

## Chat models

### Bedrock Chat

>[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of 
> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`, 
> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to 
> build generative AI applications with security, privacy, and responsible AI. Using `Amazon Bedrock`, 
> you can easily experiment with and evaluate top FMs for your use case, privately customize them with 
> your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build 
> agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is 
> serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy 
> generative AI capabilities into your applications using the AWS services you are already familiar with.

See a [usage example](/docs/integrations/chat/bedrock).

```python
from langchain_aws import ChatBedrock
```
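A minimal usage sketch, assuming valid AWS credentials and Bedrock access; the model ID and region below are illustrative:

```python
from langchain_aws import ChatBedrock

# The model ID and region are examples; substitute ones enabled in your account.
llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",
)
response = llm.invoke("What is the capital of France?")
print(response.content)
```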

### Bedrock Converse

AWS has released the Bedrock Converse API, which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html). To improve reliability, the `ChatBedrock` integration will switch to using the Bedrock Converse API as soon as it reaches feature parity with the existing Bedrock API. Until then, a separate [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) integration has been released.

We recommend using `ChatBedrockConverse` for users who do not need to use custom models. See the [docs](/docs/integrations/chat/bedrock/#bedrock-converse-api) and [API reference](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) for more detail.

```python
from langchain_aws import ChatBedrockConverse
```
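A minimal sketch, again assuming Bedrock access and an illustrative model ID:

```python
from langchain_aws import ChatBedrockConverse

# The model ID is an example; use one enabled in your account.
llm = ChatBedrockConverse(model="anthropic.claude-3-sonnet-20240229-v1:0")
response = llm.invoke("Summarize the Converse API in one sentence.")
print(response.content)
```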

## LLMs

### Bedrock
 
See a [usage example](/docs/integrations/llms/bedrock).

```python
from langchain_aws import BedrockLLM
```
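A minimal sketch (the model ID is illustrative and requires Bedrock access in your account):

```python
from langchain_aws import BedrockLLM

llm = BedrockLLM(model_id="amazon.titan-text-express-v1", region_name="us-east-1")
print(llm.invoke("Name one AWS storage service."))
```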

### Amazon API Gateway

>[Amazon API Gateway](https://aws.amazon.com/api-gateway/) is a fully managed service that makes it easy for 
> developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" 
> for applications to access data, business logic, or functionality from your backend services. Using 
> `API Gateway`, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication.
> `API Gateway` supports containerized and serverless workloads, as well as web applications.
> 
> `API Gateway` handles all the tasks involved in accepting and processing up to hundreds of thousands of 
> concurrent API calls, including traffic management, CORS support, authorization and access control, 
> throttling, monitoring, and API version management. `API Gateway` has no minimum fees or startup costs. 
> You pay for the API calls you receive and the amount of data transferred out and, with the `API Gateway` 
> tiered pricing model, you can reduce your cost as your API usage scales.

See a [usage example](/docs/integrations/llms/amazon_api_gateway).

```python
from langchain_community.llms import AmazonAPIGateway
```

### SageMaker Endpoint

>[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a service for building, training, and deploying
> machine learning (ML) models with fully managed infrastructure, tools, and workflows.

We use `SageMaker` to host a model and expose it as a `SageMaker Endpoint`.

See a [usage example](/docs/integrations/llms/sagemaker).

```python
from langchain_aws import SagemakerEndpoint
```
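Calling an endpoint requires a content handler that serializes the prompt into the payload your deployed model expects. A sketch follows; the endpoint name and the JSON shapes are assumptions that depend on your deployment:

```python
import json

from langchain_aws import SagemakerEndpoint
from langchain_aws.llms.sagemaker_endpoint import LLMContentHandler


class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Payload format depends on the model you deployed.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]


llm = SagemakerEndpoint(
    endpoint_name="my-text-generation-endpoint",  # hypothetical endpoint name
    region_name="us-east-1",
    content_handler=ContentHandler(),
)
```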

## Embedding Models

### Bedrock

See a [usage example](/docs/integrations/text_embedding/bedrock).
```python
from langchain_aws import BedrockEmbeddings
```
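A minimal sketch (the model ID is illustrative):

```python
from langchain_aws import BedrockEmbeddings

embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")
vector = embeddings.embed_query("Hello, world!")
print(len(vector))  # dimensionality depends on the model
```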

### SageMaker Endpoint

See a [usage example](/docs/integrations/text_embedding/sagemaker-endpoint).
```python
from langchain_community.embeddings import SagemakerEndpointEmbeddings
from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase
```

## Document loaders

### AWS S3 Directory and File

>[Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
> is an object storage service that stores objects in
> [buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html), which can be organized using
> [folder-like prefixes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html).

See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory).

See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file).

```python
from langchain_community.document_loaders import S3DirectoryLoader, S3FileLoader
```
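A minimal sketch; the bucket name and prefix are placeholders, and your credentials must allow reading the objects:

```python
from langchain_community.document_loaders import S3DirectoryLoader

loader = S3DirectoryLoader("my-bucket", prefix="reports/")
docs = loader.load()
```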

### Amazon Textract

>[Amazon Textract](https://docs.aws.amazon.com/managedservices/latest/userguide/textract.html) is a machine 
> learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.

See a [usage example](/docs/integrations/document_loaders/amazon_textract).

```python
from langchain_community.document_loaders import AmazonTextractPDFLoader
```
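A minimal sketch; the loader accepts a local path or an S3 URI (the URI below is a placeholder):

```python
from langchain_community.document_loaders import AmazonTextractPDFLoader

loader = AmazonTextractPDFLoader("s3://my-bucket/my-document.pdf")
documents = loader.load()
```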

### Amazon Athena

>[Amazon Athena](https://aws.amazon.com/athena/) is a serverless, interactive analytics service built
>on open-source frameworks, supporting open-table and file formats.

See a [usage example](/docs/integrations/document_loaders/athena).

```python
from langchain_community.document_loaders.athena import AthenaLoader
```
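A minimal sketch (the query, database, and S3 output location are placeholders):

```python
from langchain_community.document_loaders.athena import AthenaLoader

loader = AthenaLoader(
    query="SELECT * FROM my_table LIMIT 10",
    database="my_database",
    s3_output_uri="s3://my-bucket/athena-results/",
    profile_name="default",
)
documents = loader.load()
```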

### AWS Glue

>The [AWS Glue Data Catalog](https://docs.aws.amazon.com/en_en/glue/latest/dg/catalog-and-crawler.html) is a centralized metadata 
> repository that allows you to manage, access, and share metadata about 
> your data stored in AWS. It acts as a metadata store for your data assets, 
> enabling various AWS services and your applications to query and connect 
> to the data they need efficiently.

See a [usage example](/docs/integrations/document_loaders/glue_catalog).

```python
from langchain_community.document_loaders.glue_catalog import GlueCatalogLoader
```

## Vector stores

### Amazon OpenSearch Service

> [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) performs 
> interactive log analytics, real-time application monitoring, website search, and more. `OpenSearch` is 
> an open source, 
> distributed search and analytics suite derived from `Elasticsearch`. `Amazon OpenSearch Service` offers the 
> latest versions of `OpenSearch`, support for many versions of `Elasticsearch`, as well as 
> visualization capabilities powered by `OpenSearch Dashboards` and `Kibana`.

We need to install several Python libraries.

```bash
pip install boto3 requests requests-aws4auth
```

See a [usage example](/docs/integrations/vectorstores/opensearch#using-aos-amazon-opensearch-service).

```python
from langchain_community.vectorstores import OpenSearchVectorSearch
```
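A sketch of connecting with SigV4 request signing; the domain endpoint, region, and embedding model are assumptions:

```python
import boto3
from opensearchpy import RequestsHttpConnection
from requests_aws4auth import AWS4Auth

from langchain_aws import BedrockEmbeddings
from langchain_community.vectorstores import OpenSearchVectorSearch

region = "us-east-1"
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    region,
    "es",
    session_token=credentials.token,
)

vectorstore = OpenSearchVectorSearch(
    opensearch_url="https://my-domain.us-east-1.es.amazonaws.com",  # placeholder
    index_name="my-index",
    embedding_function=BedrockEmbeddings(model_id="amazon.titan-embed-text-v1"),
    http_auth=awsauth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)
```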

### Amazon DocumentDB Vector Search

>[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.
> With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.
> Vector search for Amazon DocumentDB combines the flexibility and rich querying capability of a JSON-based document database with the power of vector search.

#### Installation and Setup

See [detail configuration instructions](/docs/integrations/vectorstores/documentdb).

We need to install the `pymongo` Python package.

```bash
pip install pymongo
```

#### Deploy DocumentDB on AWS

[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) is a fast, reliable, and fully managed database service.

AWS offers services for computing, databases, storage, analytics, and other functionality. For an overview of all AWS services, see [Cloud Computing with Amazon Web Services](https://aws.amazon.com/what-is-aws/).

See a [usage example](/docs/integrations/vectorstores/documentdb).

```python
from langchain_community.vectorstores import DocumentDBVectorSearch
```
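A sketch of building the store from documents; the connection string, database and collection names, and the `docs`/`embeddings` objects are placeholders:

```python
from pymongo import MongoClient

from langchain_community.vectorstores import DocumentDBVectorSearch

client = MongoClient("<your-documentdb-connection-string>")
collection = client["my_database"]["my_collection"]

vectorstore = DocumentDBVectorSearch.from_documents(
    documents=docs,          # a list of langchain_core Document objects
    embedding=embeddings,    # any LangChain Embeddings instance
    collection=collection,
    index_name="my-index",
)
```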
### Amazon MemoryDB

>[Amazon MemoryDB](https://aws.amazon.com/memorydb/) is a durable, in-memory database service that delivers ultra-fast performance. `MemoryDB` is compatible with `Redis OSS`, a popular open-source data store,
> enabling you to quickly build applications using the same flexible and friendly `Redis OSS` APIs and commands that you already use today.

The `InMemoryVectorStore` class provides a vector store for connecting to Amazon MemoryDB.

```python
from langchain_aws.vectorstores.inmemorydb import InMemoryVectorStore

vds = InMemoryVectorStore.from_documents(
    chunks,
    embeddings,
    redis_url="rediss://cluster_endpoint:6379/ssl=True ssl_cert_reqs=none",
    vector_schema=vector_schema,
    index_name=INDEX_NAME,
)
```

See a [usage example](/docs/integrations/vectorstores/memorydb).

## Retrievers

### Amazon Kendra

> [Amazon Kendra](https://docs.aws.amazon.com/kendra/latest/dg/what-is-kendra.html) is an intelligent search service 
> provided by `Amazon Web Services` (`AWS`). It utilizes advanced natural language processing (NLP) and machine 
> learning algorithms to enable powerful search capabilities across various data sources within an organization. 
> `Kendra` is designed to help users find the information they need quickly and accurately, 
> improving productivity and decision-making.

> With `Kendra`, we can search across a wide range of content types, including documents, FAQs, knowledge bases, 
> manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and 
> contextual meanings to provide highly relevant search results.

We need to install the `langchain-aws` library.

```bash
pip install langchain-aws
```

See a [usage example](/docs/integrations/retrievers/amazon_kendra_retriever).

```python
from langchain_aws import AmazonKendraRetriever
```
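A minimal sketch (the index ID is a placeholder):

```python
from langchain_aws import AmazonKendraRetriever

retriever = AmazonKendraRetriever(index_id="<your-kendra-index-id>")
docs = retriever.invoke("What is Amazon Kendra?")
```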

### Amazon Bedrock (Knowledge Bases)

> [Knowledge bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an 
> `Amazon Web Services` (`AWS`) offering which lets you quickly build RAG applications by using your 
> private data to customize foundation model response.

We need to install the `langchain-aws` library.

```bash
pip install langchain-aws
```

See a [usage example](/docs/integrations/retrievers/bedrock).

```python
from langchain_aws import AmazonKnowledgeBasesRetriever
```
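A minimal sketch (the knowledge base ID is a placeholder):

```python
from langchain_aws import AmazonKnowledgeBasesRetriever

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="<your-knowledge-base-id>",
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
docs = retriever.invoke("How do I rotate my access keys?")
```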

## Tools

### AWS Lambda

>[`AWS Lambda`](https://aws.amazon.com/pm/lambda/) is a serverless computing service provided by 
> `Amazon Web Services` (`AWS`). It lets developers build and run applications and services without 
> provisioning or managing servers. This serverless architecture enables you to focus on writing and 
> deploying code, while AWS automatically takes care of scaling, patching, and managing the 
> infrastructure required to run your applications.

We need to install the `boto3` Python library.

```bash
pip install boto3
```

See a [usage example](/docs/integrations/tools/awslambda).

## Memory

### AWS DynamoDB

>[AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html) 
> is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability.
 
We have to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). 

We need to install the `boto3` library.

```bash
pip install boto3
```

See a [usage example](/docs/integrations/memory/aws_dynamodb).

```python
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
```
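A minimal sketch; the table is assumed to already exist with a partition key named `SessionId`:

```python
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-123")
history.add_user_message("Hello!")
history.add_ai_message("Hi, how can I help you?")
print(history.messages)
```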

## Graphs

### Amazon Neptune

>[Amazon Neptune](https://aws.amazon.com/neptune/)
> is a high-performance graph analytics and serverless database for superior scalability and availability.

For the Cypher and SPARQL integrations below, we need to install the `langchain-aws` library.

```bash
pip install langchain-aws
```

### Amazon Neptune with Cypher

See a [usage example](/docs/integrations/graphs/amazon_neptune_open_cypher).

```python
from langchain_aws.graphs import NeptuneGraph
from langchain_aws.graphs import NeptuneAnalyticsGraph
from langchain_aws.chains import create_neptune_opencypher_qa_chain
```
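A sketch of wiring the graph into a QA chain; the Neptune endpoint and the Bedrock model ID are placeholders:

```python
from langchain_aws import ChatBedrockConverse
from langchain_aws.chains import create_neptune_opencypher_qa_chain
from langchain_aws.graphs import NeptuneGraph

graph = NeptuneGraph(host="<your-neptune-endpoint>", port=8182)
llm = ChatBedrockConverse(model="anthropic.claude-3-sonnet-20240229-v1:0")
chain = create_neptune_opencypher_qa_chain(llm=llm, graph=graph)
result = chain.invoke("How many nodes are in the graph?")
```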

### Amazon Neptune with SPARQL

See a [usage example](/docs/integrations/graphs/amazon_neptune_sparql).

```python
from langchain_aws.graphs import NeptuneRdfGraph
from langchain_aws.chains import create_neptune_sparql_qa_chain
```



## Callbacks

### Bedrock token usage

```python
from langchain_community.callbacks.bedrock_anthropic_callback import BedrockAnthropicTokenUsageCallbackHandler
```
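This handler tracks token usage for Anthropic models on Bedrock. A convenience context manager is also available in `langchain_community`; a minimal sketch (the model ID is illustrative):

```python
from langchain_aws import ChatBedrock
from langchain_community.callbacks.manager import get_bedrock_anthropic_callback

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")
with get_bedrock_anthropic_callback() as cb:
    llm.invoke("Hello!")
    print(cb.total_tokens, cb.total_cost)
```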

### SageMaker Tracking

>[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly 
> and easily build, train, and deploy machine learning (ML) models.

>[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability 
> of `Amazon SageMaker` that lets you organize, track, 
> compare and evaluate ML experiments and model versions.
 
We need to install several Python libraries.

```bash
pip install google-search-results sagemaker
```

See a [usage example](/docs/integrations/callbacks/sagemaker_tracking).

```python
from langchain_community.callbacks import SageMakerCallbackHandler
```

## Chains

### Amazon Comprehend Moderation Chain

>[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that 
> uses machine learning to uncover valuable insights and connections in text.


We need to install the `boto3` and `nltk` libraries.

```bash
pip install boto3 nltk
```

See a [usage example](https://python.langchain.com/v0.1/docs/guides/productionization/safety/amazon_comprehend_chain/).

```python
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain
```