# Couchbase

> [Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database 
> that delivers unmatched versatility, performance, scalability, and financial value 
> for all of your cloud, mobile, AI, and edge computing applications.

## Installation and Setup

Install the `langchain-couchbase` package:

```bash
pip install langchain-couchbase
```
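
Most of the integrations below expect a connected Couchbase `Cluster` object. A minimal connection sketch using the Couchbase Python SDK (the connection string and credentials are placeholders for your own deployment):

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Placeholder connection string and credentials -- replace with your own.
auth = PasswordAuthenticator("Administrator", "password")
cluster = Cluster("couchbase://localhost", ClusterOptions(auth))

# Wait until the cluster is ready before using it.
cluster.wait_until_ready(timedelta(seconds=5))
```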

## Vector Store

See a [usage example](/docs/integrations/vectorstores/couchbase).

```python
from langchain_couchbase import CouchbaseVectorStore
```
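
A minimal instantiation sketch, assuming a bucket, scope, collection, and a Search (vector) index already exist — all names below are examples:

```python
from langchain_couchbase import CouchbaseVectorStore
from langchain_openai import OpenAIEmbeddings

cluster = couchbase_cluster_connection_object  # see the connection sketch above

vector_store = CouchbaseVectorStore(
    cluster=cluster,
    bucket_name="example-bucket",
    scope_name="example-scope",
    collection_name="example-collection",
    embedding=OpenAIEmbeddings(),
    index_name="vector-index",  # name of an existing Search index
)

vector_store.add_texts(["Couchbase is a distributed NoSQL database."])
results = vector_store.similarity_search("What is Couchbase?", k=1)
```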

## Document loader

See a [usage example](/docs/integrations/document_loaders/couchbase).

```python
from langchain_community.document_loaders.couchbase import CouchbaseLoader
```
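
A sketch of loading documents with a SQL++ query; the connection details and query are placeholders, and `page_content_fields`/`metadata_fields` control which result fields become document text versus metadata:

```python
from langchain_community.document_loaders.couchbase import CouchbaseLoader

# Placeholder connection details and query -- replace with your own.
loader = CouchbaseLoader(
    connection_string="couchbase://localhost",
    db_username="Administrator",
    db_password="password",
    query="SELECT h.name, h.city FROM `travel-sample`.inventory.hotel h LIMIT 5",
    page_content_fields=["name"],  # fields that become the document text
    metadata_fields=["city"],      # fields that become document metadata
)

docs = loader.load()
```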

## LLM Caches

### CouchbaseCache
Use Couchbase as a cache for prompts and responses.

See a [usage example](/docs/integrations/llm_caching/#couchbase-cache).

To import this cache:
```python
from langchain_couchbase.cache import CouchbaseCache
```

To use this cache with your LLMs:
```python
from langchain_core.globals import set_llm_cache

cluster = couchbase_cluster_connection_object  # a connected Cluster (see the connection sketch above)

set_llm_cache(
    CouchbaseCache(
        cluster=cluster,
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
    )
)
```
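
Once set, the cache applies globally to LLM calls. A quick sketch of an exact-match cache hit (the model name is just an example):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # example model

llm.invoke("Tell me a joke")  # first call hits the model and stores the response
llm.invoke("Tell me a joke")  # the identical prompt is served from Couchbase
```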


### CouchbaseSemanticCache
Semantic caching returns a cached result when a new prompt is semantically similar to a previously cached one. Under the hood, it uses Couchbase as both a cache and a vector store.
The `CouchbaseSemanticCache` needs a Search Index defined to work. See the [usage example](/docs/integrations/vectorstores/couchbase) for how to set up the index.

See a [usage example](/docs/integrations/llm_caching/#couchbase-semantic-cache).

To import this cache:
```python
from langchain_couchbase.cache import CouchbaseSemanticCache
```

To use this cache with your LLMs:
```python
from langchain_core.globals import set_llm_cache

# use any embedding provider...
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
cluster = couchbase_cluster_connection_object

set_llm_cache(
    CouchbaseSemanticCache(
        cluster=cluster,
        embedding=embeddings,
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
        index_name=INDEX_NAME,
    )
)
```
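
Unlike the exact-match cache above, a semantically similar rephrasing can also hit the cache. A sketch (model name is an example):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # example model

llm.invoke("What is the capital of France?")  # computed, embedded, and cached
llm.invoke("What's France's capital city?")   # close enough in embedding space to be served from the cache
```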

## Chat Message History
Use Couchbase as the storage for your chat messages.

See a [usage example](/docs/integrations/memory/couchbase_chat_message_history).

To use the chat message history in your applications:
```python
from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory

message_history = CouchbaseChatMessageHistory(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    session_id="test-session",
)

message_history.add_user_message("hi!")
```
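
Messages persist in the Couchbase collection keyed by `session_id`, so they can be read back later; for instance:

```python
message_history.add_ai_message("Hello! How can I help you today?")

# Reconnecting with the same session_id returns the stored conversation.
print(message_history.messages)  # [HumanMessage("hi!"), AIMessage("Hello! ...")]
```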