import numpy as np
from transformers import pipeline

def retrieve_top_k_documents(vector_store, query, top_k=5):
    """Retrieve the top-k most similar documents, then rerank them with a cross-encoder."""
    documents = vector_store.similarity_search(query, k=top_k)
    documents = rerank_documents(query, documents)
    return documents

# Reranking: Cross-Encoder for refining top-k results
def rerank_documents(query, documents, reranker_model_name="cross-encoder/ms-marco-electra-base"):
    """
    Re-rank documents using a cross-encoder model.

    Parameters:
        query (str): The user's query.
        documents (list): List of LangChain Document objects.
        reranker_model_name (str): Hugging Face model name for re-ranking.

    Returns:
        list: Re-ranked list of Document objects with updated scores.
    """
    # Initialize the cross-encoder pipeline; by default it returns one
    # {"label", "score"} dict per input pair (the deprecated return_all_scores=False
    # flag is this same default behavior)
    reranker = pipeline("text-classification", model=reranker_model_name)

    # Pair the query with each document's text
    rerank_inputs = [{"text": query, "text_pair": doc.page_content} for doc in documents]

    # Get relevance scores for each query-document pair
    scores = reranker(rerank_inputs)

    # Attach the new scores to the documents
    for doc, score in zip(documents, scores):
        doc.metadata["rerank_score"] = score["score"]  # Add score to document metadata

    # Sort documents by the rerank_score in descending order
    documents = sorted(documents, key=lambda x: x.metadata.get("rerank_score", 0), reverse=True)
    return documents
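
# The attach-and-sort step above can be exercised without downloading a model.
# The sketch below uses DummyDoc, a minimal stand-in for LangChain's Document,
# and hand-written scores shaped like the pipeline output; both are illustrative
# assumptions, not real pipeline results.

```python
class DummyDoc:
    """Minimal stand-in for a LangChain Document: page_content plus a metadata dict."""
    def __init__(self, page_content):
        self.page_content = page_content
        self.metadata = {}

docs = [DummyDoc("faiss intro"), DummyDoc("cross-encoders"), DummyDoc("unrelated")]
# Scores shaped like the text-classification pipeline output: {"label": ..., "score": ...}
scores = [{"label": "LABEL_1", "score": 0.41},
          {"label": "LABEL_1", "score": 0.93},
          {"label": "LABEL_1", "score": 0.07}]

# Same attach-and-sort logic as rerank_documents
for doc, score in zip(docs, scores):
    doc.metadata["rerank_score"] = score["score"]

ranked = sorted(docs, key=lambda d: d.metadata.get("rerank_score", 0), reverse=True)
print([d.page_content for d in ranked])  # → ['cross-encoders', 'faiss intro', 'unrelated']
```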


# Query Handling: manually retrieve top-k candidates from the FAISS index
# (shown for learning purposes; retrieve_top_k_documents above is the normal path)
def retrieve_top_k_documents_manual(vector_store, query, top_k=5):
    """
    Retrieve top-k documents using FAISS index and optionally rerank them.

    Parameters:
        vector_store (FAISS): The vector store containing the FAISS index and docstore.
        query (str): The user's query string.
        top_k (int): The number of top results to retrieve.

    Returns:
        list: Top-k retrieved and reranked documents.
    """
    # Encode the query into a dense vector
    embedding_model = vector_store.embedding_function
    query_vector = embedding_model.embed_query(query)  # Encode the query
    query_vector = np.array([query_vector]).astype('float32')
    
    # Search the FAISS index for top_k results
    distances, indices = vector_store.index.search(query_vector, top_k)

    # Retrieve documents from the docstore
    documents = []
    for idx in indices.flatten():
        if idx == -1:  # FAISS pads results with -1 when fewer than top_k vectors match
            continue
        doc_id = vector_store.index_to_docstore_id[idx]

        # Access the internal dictionary of InMemoryDocstore
        internal_docstore = getattr(vector_store.docstore, "_dict", None)
        if internal_docstore and doc_id in internal_docstore:  # Check if doc_id exists
            document = internal_docstore[doc_id]
            documents.append(document)

    # Rerank the documents 
    documents = rerank_documents(query, documents)
    
    return documents
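
# What index.search returns: a (distances, indices) pair, where indices holds the
# integer row ids the loop above maps back to docstore ids. Its semantics can be
# mimicked with plain NumPy; the corpus and query below are synthetic stand-ins,
# not real embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.standard_normal((8, 4)).astype("float32")  # 8 stored embeddings, dim 4
query = corpus[3].copy()                                # query identical to row 3

top_k = 3
dists = np.linalg.norm(corpus - query, axis=1)          # L2 distance to every stored row
indices = np.argsort(dists)[:top_k]                     # row ids of the top_k nearest
print(indices[0])  # → 3 (the row the query was copied from)
```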