patronmoses committed
Commit 3b402d9 · verified · 1 Parent(s): 07d413c

Upload README.md

Files changed (1)
  1. README.md +149 -14
README.md CHANGED
@@ -1,14 +1,149 @@
- ---
- title: RAG BITS Tutor
- emoji: 🏃
- colorFrom: green
- colorTo: green
- sdk: gradio
- sdk_version: 5.31.0
- app_file: app.py
- pinned: false
- license: mit
- short_description: RAG Business IT Strategie Tutor
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # RAG Study Tutor for Business IT Strategy
+
+ **Author:** Laurel Mayer
+ **Module:** AI Applications (w.3KIA) - Project 3
+
+ ## 1. Project Description
+
+ This project implements a Retrieval-Augmented Generation (RAG) application designed to act as a "Study Tutor" for the subject "Business IT Strategy". The primary goal is to enable users to ask questions about specific course content and receive well-founded, context-based answers derived from the provided lecture materials and case studies. The application integrates a retrieval component for searching relevant text passages with a Large Language Model (LLM) for generating the final answers.
+
+ ### Name & URL
+
+ | Name | URL |
+ |----------------------|----------------------------------------------------------------------------------|
+ | Code | [GitHub Repository](https://github.com/patronlaurel/RAG-BITS-Tutor) |
+ | Embedding Model Page | [Sahajtomar/German-semantic](https://huggingface.co/Sahajtomar/German-semantic) |
+ | LLM Provider (Groq) | [Groq](https://groq.com/) |
+ | Jupyter Notebook | [main_project.ipynb](main_project.ipynb) |
+ | FAISS Index & Chunks | [/faiss_index_bits/](faiss_index_bits/) |
+
+ ## 2. Data Sources
+
+ The knowledge base for the RAG Tutor consists of:
+
+ | Data Source | Description |
+ |--------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | Own course materials (lecture PDFs) | 13 PDF documents comprising lecture notes and case studies (including solutions) for the "Business IT Strategy" course, located in the `data/` folder of this repository. |
+ | Total extracted text volume | Approx. 221,049 characters. |
+
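+ A minimal sketch of the extraction step, assuming PyPDF2 3.x and the `data/` folder layout described above; the exact notebook code may differ:
+
+ ```python
+ from pathlib import Path
+ from PyPDF2 import PdfReader
+
+ full_text = ""
+ for pdf_path in sorted(Path("data").glob("*.pdf")):
+     reader = PdfReader(pdf_path)
+     for page in reader.pages:
+         # extract_text() can return None for pages without a text layer
+         full_text += (page.extract_text() or "") + "\n"
+
+ print(len(full_text))  # approx. 221,049 characters for the 13 course PDFs
+ ```
+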
+ ## 3. RAG Improvements
+
+ To enhance the RAG system's performance, the following adaptation was implemented:
+
+ | Improvement | Description |
+ |-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | `Query Expansion` (using LLM) | The original user query is sent to an LLM (`llama3-8b-8192` via Groq) to generate 2-3 alternative formulations or relevant keywords. These expanded queries are then additionally used for retrieval to create a broader contextual base for final answer generation; a minimal sketch follows the table. The implementation and evaluation of this method are detailed in Section 5 of the Jupyter Notebook (`main_project.ipynb`). |
+ | Other Potential Improvements | For this project, the focus was on implementing and evaluating Query Expansion. Further potential improvements and adaptation mechanisms (e.g., re-ranking of search results, hybrid search) are discussed in the "Conclusion and Outlook" section of this document and in the notebook. |
+
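+ The expansion step can be sketched as follows. This is a minimal illustration, not the notebook's exact code: it assumes the official `groq` Python client, and the prompt wording and `expand_query` helper are hypothetical.
+
+ ```python
+ import os
+ from groq import Groq
+
+ client = Groq(api_key=os.environ["GROQ_API_KEY"])
+
+ def expand_query(query: str, n: int = 3) -> list[str]:
+     """Ask a small, fast LLM for alternative formulations of the user query."""
+     prompt = (
+         f"Generate {n} alternative formulations or relevant keywords for the "
+         f"following study question. Return one per line, without numbering.\n\n"
+         f"Question: {query}"
+     )
+     response = client.chat.completions.create(
+         model="llama3-8b-8192",  # small model keeps this intermediate step fast
+         messages=[{"role": "user", "content": prompt}],
+     )
+     lines = response.choices[0].message.content.splitlines()
+     # Keep the original query and add the non-empty expansions.
+     return [query] + [l.strip() for l in lines if l.strip()][:n]
+ ```
+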
+ ## 4. Chunking
+
+ ### Data Chunking Method
+
+ The choice of chunking strategy is crucial for retrieval quality, as it determines how context is divided and fed to the embedding model. For this project, the text extracted from the PDFs was chunked as follows:
+
+ | Type of Chunking | Configuration | Result (Number of Chunks) |
+ |------------------------------------------------------------------|------------------------------------------------------|---------------------------|
+ | **`RecursiveCharacterTextSplitter` (Langchain) - Chosen Method** | Chunk size: 1500 characters, overlap: 200 characters | 203 |
+
+ **Reasoning for the chosen method:**
+ The `RecursiveCharacterTextSplitter` was selected because it attempts to maintain semantically coherent blocks by recursively splitting at various separators (paragraphs, sentences, etc.). A `chunk_size` of 1500 characters with an `overlap` of 200 characters was chosen as a good starting point: chunks should contain sufficient context for understanding, but not be so large that they exceed the maximum input length of the embedding model or introduce too much noise for specific queries. The resulting 203 chunks were a manageable quantity for further processing. A minimal configuration sketch follows.
+
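+ This sketch shows the chosen configuration, assuming `full_text` from the extraction sketch in Section 2; the import path may differ slightly depending on the installed Langchain version:
+
+ ```python
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
+
+ splitter = RecursiveCharacterTextSplitter(
+     chunk_size=1500,    # characters per chunk, as chosen above
+     chunk_overlap=200,  # overlap preserves context across chunk boundaries
+ )
+ chunks = splitter.split_text(full_text)
+ print(len(chunks))  # 203 chunks for this corpus
+ ```
+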
+ **Alternatively Considered Chunking Approaches:**
+
+ | Type of Chunking | Hypothetical Configuration/Consideration | Potential Advantages/Disadvantages |
+ |-------------------------------------------------------------|------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|
+ | `CharacterTextSplitter` (Langchain) | Fixed chunk size (e.g., 1000), overlap (e.g., 150) | Simpler, but pays less regard to semantic boundaries; could split sentences or thoughts mid-way. |
+ | `SentenceTransformersTokenTextSplitter` | Based on token limits of the embedding model (e.g., 256 tokens) | More precise adaptation to the embedding model, but requires knowledge of tokenizer specifics; would have produced a different number and granularity of chunks. |
+ | Smaller `chunk_size` with `RecursiveCharacterTextSplitter` | E.g., 500 characters, overlap 50 | More, but more specific chunks. Could help with very detailed questions, but would also fragment context more and require more chunks per answer. |
+
+ *Decision Process:* Although other methods and configurations exist, the initial configuration of the `RecursiveCharacterTextSplitter` was retained, as it offered a good compromise between implementation effort, context preservation, and the resulting number of chunks for this dataset. Deeper optimization of the chunking strategy would be a natural next step to further enhance retrieval accuracy. The documentation of this project focuses on the overall process and the implementation of a core RAG pipeline with one form of adaptation.
+
+ ## 5. Choice of LLM
+
+ Two LLMs, accessed via the Groq API, were used in this RAG application:
+
+ | LLM Name (Groq) | Used for | Link/Reference |
+ |-------------------|----------------------------------------|-----------------------------------------------------|
+ | `llama3-70b-8192` | Final answer generation | [Groq Models](https://console.groq.com/docs/models) |
+ | `llama3-8b-8192`  | Query expansion (adaptation mechanism) | [Groq Models](https://console.groq.com/docs/models) |
+
+ *Reasoning:* `llama3-70b-8192` was chosen for answer generation due to its strong performance in synthesizing information and generating coherent text. For query expansion, the smaller `llama3-8b-8192` was used to reduce the latency of this intermediate step while still producing good-quality expansions.
+
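+ The generation step can then be sketched as below, reusing the `client` from the query-expansion sketch in Section 3; the prompt template and `generate_answer` helper are illustrative, not the notebook's exact code:
+
+ ```python
+ def generate_answer(query: str, context_chunks: list[str]) -> str:
+     """Synthesize a final answer from the retrieved course-material chunks."""
+     context = "\n\n".join(context_chunks)
+     response = client.chat.completions.create(
+         model="llama3-70b-8192",  # larger model for the final synthesis step
+         messages=[
+             {"role": "system", "content": "Answer the study question using only the provided course context."},
+             {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
+         ],
+     )
+     return response.choices[0].message.content
+ ```
+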
+ ## 6. Test Method
+
+ The evaluation of the RAG application and the query-expansion mechanism was conducted qualitatively, using specific test questions about the content of the course materials. The procedure was as follows (a sketch of steps 2-3 appears below):
+ 1. Generate an answer based on the **original user query** and the chunks retrieved directly for it.
+ 2. Generate **expanded search queries** from the original user query using an LLM.
+ 3. Retrieve chunks based on these expanded queries, then collect and de-duplicate them to form an **expanded context**.
+ 4. Generate an answer based on the expanded context and the original user query.
+ 5. Conduct a **qualitative comparison** of the two generated answers in terms of depth of detail, correctness, and relevance to the context.
+
+ The hypothesis was that query expansion could lead to more comprehensive and precise answers.
+
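+ A minimal sketch of steps 2-3, assuming the `expand_query` helper above and a `retrieve(query, k)` helper that returns the top-k chunk strings from the FAISS index (a possible implementation is sketched in Section 8):
+
+ ```python
+ def expanded_context(query: str, k: int = 4) -> list[str]:
+     """Merge and de-duplicate chunks retrieved for the query and its expansions."""
+     seen, merged = set(), []
+     for q in expand_query(query):     # original query plus LLM expansions
+         for chunk in retrieve(q, k):  # top-k FAISS lookup per query variant
+             if chunk not in seen:     # de-duplicate across query variants
+                 seen.add(chunk)
+                 merged.append(chunk)
+     return merged
+ ```
+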
+ Detailed test cases and results are documented in the Jupyter Notebook (`main_project.ipynb`) in Sections 4.2 and 5.
+
+ ## 7. Results
+
+ As the evaluation was primarily qualitative, the main observations are summarized here; detailed examples can be found in the notebook.
+
+ | Model/Method | Observation |
+ |---------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | Base RAG (original query) | Provides precise, good answers to direct questions such as "Was ist eine IT-Strategie?" ("What is an IT strategy?"). |
+ | RAG with Query Expansion  | For "Was ist eine IT-Strategie?", there was hardly any difference compared to the base RAG. For "Welche Rolle spielt IT-Governance?" ("What role does IT governance play?"), query expansion led to a **visibly more detailed and comprehensive answer** that included additional relevant aspects. |
+
+ **Conclusion of Results**: Query expansion can improve answer quality by providing a broader and more relevant context for the LLM. However, the added value depends strongly on the initial question and the quality of the generated expansions.
+
+ ## 8. Setup and Execution
+
+ To run this project locally:
+
+ 1. **Prerequisites**:
+    * Python 3.10 or higher (Python 3.12 was used).
+    * Git.
+ 2. **Clone Repository**:
+    ```bash
+    git clone https://github.com/patronlaurel/RAG-BITS-Tutor.git
+    cd RAG-BITS-Tutor
+    ```
+    *(Replace with your actual repository URL if different.)*
+ 3. **Create and Activate Virtual Environment**:
+    ```bash
+    python -m venv .venv
+    # Windows:
+    .\.venv\Scripts\activate
+    # macOS/Linux:
+    source .venv/bin/activate
+    ```
+ 4. **Install Dependencies**:
+    ```bash
+    pip install -r requirements.txt
+    ```
+    (The `requirements.txt` file was generated with `pip freeze > requirements.txt` and is included in the repository.)
+ 5. **Set up API Key**:
+    * Create a file named `.env` in the project's root directory.
+    * Add your Groq API key: `GROQ_API_KEY=your_groq_api_key`
+ 6. **Start Jupyter Notebook**:
+    ```bash
+    jupyter lab
+    ```
+    Then open the notebook `main_project.ipynb`. The PDF data must be placed in the `data/` folder. The FAISS index (`faiss_index_bits/bits_tutor.index`) and chunks (`faiss_index_bits/bits_chunks.pkl`) are created and saved during the first run of Section 3.4 in the notebook; a sketch of that indexing step follows the list.
+
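+ The indexing and retrieval step can be sketched as follows, reusing the file names above and the `chunks` list from Section 4. This is an illustration of what Section 3.4 of the notebook produces, not its exact code; in particular, the flat L2 index type is an assumption, and `retrieve` is the helper assumed in Section 6.
+
+ ```python
+ import pickle
+ from pathlib import Path
+
+ import faiss
+ import numpy as np
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("Sahajtomar/German-semantic")
+
+ # Embed all chunks and build a flat L2 index over them.
+ embeddings = np.asarray(model.encode(chunks), dtype="float32")
+ index = faiss.IndexFlatL2(embeddings.shape[1])
+ index.add(embeddings)
+
+ # Persist the index and chunks so later runs can skip re-embedding.
+ Path("faiss_index_bits").mkdir(exist_ok=True)
+ faiss.write_index(index, "faiss_index_bits/bits_tutor.index")
+ with open("faiss_index_bits/bits_chunks.pkl", "wb") as f:
+     pickle.dump(chunks, f)
+
+ def retrieve(query: str, k: int = 4) -> list[str]:
+     """Return the k chunks closest to the query in embedding space."""
+     q = np.asarray(model.encode([query]), dtype="float32")
+     _, indices = index.search(q, k)
+     return [chunks[i] for i in indices[0]]
+ ```
+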
+ ## 9. Technologies and Libraries Used
+ * Python 3.12
+ * Jupyter Lab
+ * Langchain
+ * Sentence Transformers (`Sahajtomar/German-semantic`)
+ * FAISS (Facebook AI Similarity Search)
+ * Groq API (`llama3-70b-8192`, `llama3-8b-8192`)
+ * PyPDF2
+ * NumPy
+ * python-dotenv
+ * tqdm
+
144
+ ## 10. Conclusion and Outlook
145
+ *(Summarize the key points from Section 7 of your notebook. Example:)*
146
+ This project successfully demonstrated the construction of a RAG application as a "Study Tutor." By implementing LLM-based query expansion, it was shown how the depth of detail and informational content of answers can be improved for certain queries. Key insights relate to the importance of data quality, appropriate model selection, and the potential of adaptation mechanisms. Future work could focus on extended evaluation methods, exploring further adaptation techniques like re-ranking, or developing an interactive user interface.
147
+
148
+ ## 11. References
149
+ *(List any specific scientific papers, blog posts, or other sources you heavily relied on for your methodology or understanding here. For this project, direct code references and general methodology are likely sufficient.)*