|
|
|
|
|
# Couchbase

> [Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database
> that delivers unmatched versatility, performance, scalability, and financial value
> for all of your cloud, mobile, AI, and edge computing applications.
|
|
|
|
|
|
|
## Installation and Setup

We need to install the `langchain-couchbase` package:
|
|
|
```bash |
|
pip install langchain-couchbase |
|
``` |
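
The integrations below also need a connection to a Couchbase cluster. As a minimal sketch using the Couchbase Python SDK (the connection string and credentials here are placeholders; replace them with your own):

```python
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Placeholder credentials and connection string; replace with your own.
auth = PasswordAuthenticator("Administrator", "password")
cluster = Cluster("couchbase://localhost", ClusterOptions(auth))

# Wait until the cluster is ready before using it.
cluster.wait_until_ready(timedelta(seconds=5))
```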
|
|
|
|
|
|
|
## Vector Store

See a [usage example](/docs/integrations/vectorstores/couchbase).
|
|
|
```python |
|
from langchain_couchbase import CouchbaseVectorStore |
|
``` |
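
As a minimal sketch of constructing and using the vector store (assuming the `cluster` connection from the setup above, placeholder `BUCKET_NAME`/`SCOPE_NAME`/`COLLECTION_NAME` values, an existing Search Index named `INDEX_NAME`, and OpenAI embeddings; any embedding provider works):

```python
from langchain_couchbase import CouchbaseVectorStore
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

vector_store = CouchbaseVectorStore(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    embedding=embeddings,
    index_name=INDEX_NAME,
)

# Add documents, then search by semantic similarity.
vector_store.add_texts(["Couchbase is a distributed NoSQL cloud database."])
results = vector_store.similarity_search("What is Couchbase?", k=1)
```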
|
|
|
|
|
|
|
## Document loader

See a [usage example](/docs/integrations/document_loaders/couchbase).
|
|
|
```python |
|
from langchain_community.document_loaders.couchbase import CouchbaseLoader |
|
``` |
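
A minimal sketch of loading documents with a SQL++ query (the connection details and the `travel-sample` query below are placeholders):

```python
from langchain_community.document_loaders.couchbase import CouchbaseLoader

loader = CouchbaseLoader(
    connection_string="couchbase://localhost",  # placeholder
    db_username="Administrator",  # placeholder
    db_password="password",  # placeholder
    query="SELECT h.* FROM `travel-sample`.inventory.hotel h LIMIT 10",
)

documents = loader.load()
```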
|
|
|
|
|
|
|
|
|
## LLM Caches

### CouchbaseCache

Use Couchbase as a cache for prompts and responses.
|
|
|
See a [usage example](/docs/integrations/llm_caching/).
|
|
|
To import this cache: |
|
```python |
|
from langchain_couchbase.cache import CouchbaseCache |
|
``` |
|
|
|
To use this cache with your LLMs: |
|
```python |
|
from langchain_core.globals import set_llm_cache |
|
|
|
# `cluster` is your authenticated Couchbase Cluster connection object
cluster = couchbase_cluster_connection_object

set_llm_cache(
    CouchbaseCache(
        cluster=cluster,
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
    )
)
|
``` |
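
Once the cache is set, every LLM call goes through it. For example (assuming an OpenAI chat model; any LLM works):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model; use any LLM

# The first call is computed by the model; the identical second call
# is served from the Couchbase cache.
llm.invoke("Tell me a joke")
llm.invoke("Tell me a joke")
```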
|
|
|
|
|
|
|
### CouchbaseSemanticCache

Semantic caching lets you retrieve cached responses based on the semantic similarity between a new prompt and previously cached prompts. Under the hood, it uses Couchbase as both a cache and a vector store.
|
`CouchbaseSemanticCache` requires a Search Index to be defined in order to work. Please see the [usage example](/docs/integrations/vectorstores/couchbase) for how to set up the index.
|
|
|
See a [usage example](/docs/integrations/llm_caching/).
|
|
|
To import this cache: |
|
```python |
|
from langchain_couchbase.cache import CouchbaseSemanticCache |
|
``` |
|
|
|
To use this cache with your LLMs: |
|
```python |
|
from langchain_core.globals import set_llm_cache |
|
|
|
# use any embedding provider, for example OpenAI
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# `cluster` is your authenticated Couchbase Cluster connection object
cluster = couchbase_cluster_connection_object

set_llm_cache(
    CouchbaseSemanticCache(
        cluster=cluster,
        embedding=embeddings,
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
        index_name=INDEX_NAME,
    )
)
|
``` |
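
Unlike the exact-match `CouchbaseCache`, a later prompt does not have to be identical to hit this cache; it only has to be semantically similar. A sketch (again assuming an OpenAI chat model):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model; use any LLM

# Computed by the model and cached along with its embedding.
llm.invoke("What is the capital of France?")

# Close enough in embedding space that it can be answered from the cache.
llm.invoke("Tell me the capital city of France")
```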
|
|
|
|
|
## Chat Message History

Use Couchbase as the storage for your chat messages.
|
|
|
See a [usage example](/docs/integrations/memory/couchbase_chat_message_history). |
|
|
|
To use the chat message history in your applications: |
|
```python |
|
from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory |
|
|
|
message_history = CouchbaseChatMessageHistory(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    session_id="test-session",
)
|
|
|
message_history.add_user_message("hi!") |
|
``` |
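
The history is typically wired into a chain with `RunnableWithMessageHistory`, which looks up a `CouchbaseChatMessageHistory` per session id. A minimal sketch (the prompt and model below are placeholders):

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini")  # assumed model

# Look up a Couchbase-backed history for each session id.
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: CouchbaseChatMessageHistory(
        cluster=cluster,
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
        session_id=session_id,
    ),
    input_messages_key="input",
    history_messages_key="history",
)

chain_with_history.invoke(
    {"input": "hi, I'm Bob"},
    config={"configurable": {"session_id": "test-session"}},
)
```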