Blaiseboy committed on
Commit
487419a
·
verified ·
1 Parent(s): fda5cab

Upload 4 files

Files changed (4)
  1. README.md +163 -0
  2. app.py +55 -0
  3. medical_chatbot.py +876 -0
  4. requirements.txt +12 -0
README.md ADDED
@@ -0,0 +1,163 @@
---
title: BioGPT Medical Assistant
emoji: 🏥
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: mit
short_description: AI pediatric medical chatbot powered by BioGPT
tags:
- medical
- healthcare
- pediatric
- biogpt
- chatbot
- medicine
- health
models:
- microsoft/BioGPT-Large
datasets: []
---

# 🏥 BioGPT Medical Assistant

An AI-powered medical chatbot specialized in pediatric medicine, built on Microsoft's BioGPT model and deployed via Gradio.

## 🎯 Features

- **Specialized Medical AI**: Powered by BioGPT-Large, trained on extensive medical literature
- **Pediatric Focus**: Specialized knowledge of children's health and medicine
- **Evidence-Based**: Responses grounded in medical research and clinical guidelines
- **Interactive Chat**: User-friendly Gradio interface with a medical-themed design
- **Safety First**: Clear disclaimers and guidance on when to seek professional care

## 🩺 Capabilities

### Medical Topics Covered:
- **Pediatric Symptoms**: Fever, cough, rash, digestive issues
- **Treatment Guidance**: Evidence-based treatment information
- **Emergency Signs**: When to seek immediate medical attention
- **Prevention**: Vaccination schedules, disease prevention
- **Development**: Growth and developmental milestones

### Key Features:
- Real-time medical information retrieval
- Context-aware responses using a medical knowledge base
- Conversational AI with natural language understanding
- Memory-efficient deployment with model quantization
- Comprehensive medical disclaimers and safety information

## 🚀 Technical Details

- **Base Model**: Microsoft BioGPT-Large
- **Framework**: Gradio for the web interface
- **Deployment**: Hugging Face Spaces
- **Optimization**: 8-bit quantization for efficient GPU usage (see the sketch below)
- **Embeddings**: Sentence Transformers for context retrieval
- **Device Support**: CUDA GPU with CPU fallback
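
For reference, this is roughly how the model is loaded - a condensed sketch of the logic in `medical_chatbot.py` (exact flags may need tuning for your hardware):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_name = "microsoft/BioGPT-Large"
use_8bit = torch.cuda.is_available()  # 8-bit quantization only helps on GPU

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True) if use_8bit else None,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto" if torch.cuda.is_available() else None,
)
```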

## ⚠️ Important Medical Disclaimer

**This AI assistant provides educational medical information only and is NOT a substitute for professional medical advice, diagnosis, or treatment.**

### Always Consult Healthcare Professionals For:
- Medical diagnosis and treatment decisions
- Prescription medications
- Personalized medical advice
- Emergency medical situations

### Emergency Situations - Call Emergency Services:
- Difficulty breathing or choking
- Severe allergic reactions
- Unconsciousness
- Severe injuries
- Persistent high fever (>104°F/40°C)

## 🛠️ Installation & Setup

### For Local Development:

1. **Clone the repository**:
   ```bash
   git clone <your-repo-url>
   cd biogpt-medical-chatbot
   ```

2. **Install dependencies**:
   ```bash
   pip install -r requirements.txt
   ```

3. **Run the application**:
   ```bash
   python app.py
   ```

### For Hugging Face Spaces Deployment:

1. Create a new Space on Hugging Face
2. Choose "Gradio" as the Space SDK
3. Upload the following files:
   - `app.py`
   - `medical_chatbot.py` (imported by `app.py`)
   - `requirements.txt`
   - `README.md`
4. The Space will automatically build and deploy

## 📁 File Structure

```
├── app.py               # Main Gradio application
├── medical_chatbot.py   # Chatbot class: retrieval and generation logic
├── requirements.txt     # Python dependencies
└── README.md            # This file
```

## 🔧 Configuration

The chatbot automatically detects available hardware and configures itself accordingly (see the sketch below):

- **GPU Available**: Uses CUDA with 8-bit quantization
- **CPU Only**: Falls back to CPU with appropriate settings
- **Model Loading**: Attempts BioGPT-Large and falls back to a smaller model if needed
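
A condensed sketch of that fallback chain, mirroring `setup_biogpt()` and `setup_fallback_model()` in `medical_chatbot.py` (the fallback model name comes from that code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_with_fallback():
    # Try the large biomedical model first, then a smaller general model.
    for name in ("microsoft/BioGPT-Large", "microsoft/DialoGPT-medium"):
        try:
            tokenizer = AutoTokenizer.from_pretrained(name)
            model = AutoModelForCausalLM.from_pretrained(name).to(device)
            return name, tokenizer, model
        except Exception as e:
            print(f"Loading {name} failed: {e}")
    raise RuntimeError("No model could be loaded")
```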

## 💡 Usage Tips

### For Best Results:
1. **Be Specific**: Ask detailed questions about symptoms or conditions
2. **Include Context**: Mention age, duration of symptoms, etc.
3. **Medical Focus**: Stick to pediatric and general medical topics
4. **Clear Language**: Use clear, simple language in your questions

### Example Queries:
- "What causes fever in children?"
- "My 3-year-old has been coughing for 2 days, what should I do?"
- "When should I be concerned about my baby's breathing?"
- "What are the signs of dehydration in infants?"

## 🤝 Contributing

Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🙏 Acknowledgments

- **Microsoft Research** for the BioGPT model
- **Hugging Face** for model hosting and the Transformers library
- **Gradio** for the web interface framework
- **Medical Community** for evidence-based medical knowledge

## 📞 Support

For technical issues or questions about deployment, please open an issue in this repository.

**For medical emergencies, always contact emergency services or healthcare professionals immediately.**

---

*Last updated: August 2025*
app.py ADDED
@@ -0,0 +1,55 @@
import gradio as gr

from medical_chatbot import ColabBioGPTChatbot

# Global chatbot instance (loaded once at startup)
chatbot = ColabBioGPTChatbot(use_gpu=True, use_8bit=True)


def upload_and_initialize(txt_file):
    """Load the uploaded medical text file into the chatbot's knowledge base."""
    if txt_file is None:
        return "❌ Please upload a .txt medical file to initialize the chatbot.", gr.update(visible=False)

    # gr.File with type="filepath" hands us the path of a temporary copy,
    # so it can be passed straight to the loader.
    success = chatbot.load_medical_data(txt_file)

    if success:
        return "✅ Medical data uploaded and processed. You can now ask medical questions.", gr.update(visible=True)
    return "❌ Failed to process the file. Please ensure it's a valid medical text file.", gr.update(visible=False)


def chat_interface(user_input, history):
    """gr.ChatInterface calls fn with (message, history); history is unused
    here because the chatbot keeps its own conversation state."""
    return chatbot.chat(user_input)


with gr.Blocks(title="BioGPT Medical Chatbot") as demo:
    gr.Markdown("## 🏥 BioGPT Medical Chatbot\nUpload pediatric medical data and ask medical questions.\n⚠️ Educational use only.")

    with gr.Row():
        file_input = gr.File(label="📁 Upload Medical .txt File", file_types=[".txt"], type="filepath")
        upload_button = gr.Button("📤 Upload and Process")

    upload_output = gr.Textbox(label="Status", interactive=False)

    # Hide the chat area until a file has been processed; toggling a Column's
    # visibility is more reliable than toggling the ChatInterface itself.
    with gr.Column(visible=False) as chat_area:
        gr.ChatInterface(
            fn=chat_interface,
            chatbot=gr.Chatbot(label="🩺 Medical Assistant", show_copy_button=True),
            textbox=gr.Textbox(placeholder="Ask a pediatric medical question here..."),
            submit_btn="Send",
        )

    upload_button.click(
        fn=upload_and_initialize,
        inputs=[file_input],
        outputs=[upload_output, chat_area],
    )

demo.launch()
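
# Quick check outside the UI (reviewer sketch, not original code): the chatbot
# class can also be driven directly, assuming a local Pediatric_cleaned.txt
# (the filename used throughout medical_chatbot.py):
#
#   from medical_chatbot import ColabBioGPTChatbot
#   bot = ColabBioGPTChatbot(use_gpu=False, use_8bit=False)  # CPU smoke test
#   if bot.load_medical_data("Pediatric_cleaned.txt"):
#       print(bot.chat("What causes fever in children?"))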
medical_chatbot.py ADDED
@@ -0,0 +1,876 @@
# -*- coding: utf-8 -*-
"""Medical Chatbot.ipynb

Automatically generated by Colab.

Original file is located at
    https://colab.research.google.com/drive/14KonfLdcmy7nbiVr9Cxm18kT9XkehBiW
"""

# Setup and Installation
#
# The original notebook installed its dependencies with `!pip install` cells
# (transformers, torch, sentence-transformers, faiss-cpu, accelerate,
# bitsandbytes, datasets, sacremoses, numpy). As an importable module, those
# installs are covered by requirements.txt instead.

import os
import re
import time
import warnings
from datetime import datetime
from typing import List, Dict, Optional

import numpy as np
import torch
import faiss  # FAISS for vector search
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    BitsAndBytesConfig
)
from sentence_transformers import SentenceTransformer

# Suppress warnings for cleaner output
warnings.filterwarnings('ignore')

print("🖥️ System Check:")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU device: {torch.cuda.get_device_name(0)}")
    print(f"GPU memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB")
else:
    print("⚠️ No GPU detected - BioGPT will run on CPU (much slower)")

print("📚 Libraries imported successfully!")
print(f"🔍 FAISS version: {faiss.__version__}")
print("🎯 Using FAISS for vector search")


# File Upload Helper (Colab only)

def upload_medical_data():
    """Upload your Pediatric_cleaned.txt file (requires Google Colab)."""
    # Imported lazily so this module can also be used outside Colab
    # (e.g., on Hugging Face Spaces, where app.py imports it).
    from google.colab import files

    print("📁 Please upload your Pediatric_cleaned.txt file:")
    uploaded = files.upload()

    # Get the uploaded file
    filename = list(uploaded.keys())[0]
    print(f"✅ File '{filename}' uploaded successfully!")

    # Read the content
    content = uploaded[filename].decode('utf-8')

    # Save it locally in Colab
    with open('Pediatric_cleaned.txt', 'w', encoding='utf-8') as f:
        f.write(content)

    print(f"📝 File saved as 'Pediatric_cleaned.txt' ({len(content)} characters)")
    return 'Pediatric_cleaned.txt'


medical_file = 'Pediatric_cleaned.txt'


# BioGPT Medical Chatbot Class

class ColabBioGPTChatbot:
    def __init__(self, use_gpu=True, use_8bit=True):
        """Initialize BioGPT chatbot optimized for Google Colab"""
        print("🏥 Initializing Professional BioGPT Medical Chatbot...")

        self.device = "cuda" if torch.cuda.is_available() and use_gpu else "cpu"
        self.use_8bit = use_8bit and torch.cuda.is_available()

        print(f"🖥️ Using device: {self.device}")
        if self.use_8bit:
            print("💾 Using 8-bit quantization for memory efficiency")

        # Setup components
        self.setup_embeddings()
        self.setup_faiss_index()
        self.setup_biogpt()

        # Conversation tracking
        self.conversation_history = []
        self.knowledge_chunks = []

        print("✅ BioGPT Medical Chatbot ready for professional medical assistance!")

    def setup_embeddings(self):
        """Setup medical-optimized embeddings"""
        print("🔧 Loading medical embeddings...")
        try:
            # A general-purpose sentence-embedding model; swap in a
            # medical-specific one if available.
            self.embedding_model = SentenceTransformer('all-MiniLM-L6-v2')
            self.embedding_dim = self.embedding_model.get_sentence_embedding_dimension()
            print(f"✅ Embeddings loaded (dimension: {self.embedding_dim})")
            self.use_embeddings = True
        except Exception as e:
            print(f"⚠️ Embeddings failed: {e}")
            self.embedding_model = None
            self.embedding_dim = 384  # default dimension for all-MiniLM-L6-v2
            self.use_embeddings = False

    def setup_faiss_index(self):
        """Setup FAISS for CPU-based vector search"""
        print("🔧 Setting up FAISS vector database...")
        try:
            print('   Using CPU FAISS index for maximum compatibility')
            self.faiss_index = faiss.IndexFlatIP(self.embedding_dim)  # In-memory index
            self.use_gpu_faiss = False
            self.faiss_ready = True  # Set to True when the index is ready
            self.collection = self.faiss_index  # Alias kept for compatibility
            print("✅ FAISS CPU index initialized successfully")
        except Exception as e:
            print(f"❌ FAISS setup failed: {e}")
            self.faiss_index = None
            self.faiss_ready = False
            self.collection = None  # Ensure collection is None on failure
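
    # NOTE (reviewer sketch, not original code): IndexFlatIP ranks results by
    # raw inner product. To make that equivalent to cosine similarity, the
    # embedding matrix should be L2-normalized before index.add() and each
    # query vector before index.search(), e.g. with
    # faiss.normalize_L2(vectors_float32). all-MiniLM-L6-v2 embeddings are
    # not normalized by default, so longer chunks can otherwise outscore
    # more relevant ones.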

    def setup_biogpt(self):
        """Setup BioGPT model with optimizations for Colab"""
        print("🧠 Loading BioGPT-Large (this may take a few minutes on first run)...")

        model_name = "microsoft/BioGPT-Large"

        try:
            # Setup quantization config for memory efficiency
            if self.use_8bit:
                quantization_config = BitsAndBytesConfig(
                    load_in_8bit=True,
                    llm_int8_threshold=6.0,
                    llm_int8_has_fp16_weight=False,
                )
            else:
                quantization_config = None

            # Load tokenizer
            print("   Loading tokenizer...")
            self.tokenizer = AutoTokenizer.from_pretrained(model_name)

            # Set padding token
            if self.tokenizer.pad_token is None:
                self.tokenizer.pad_token = self.tokenizer.eos_token

            # Load model
            print("   Loading BioGPT model...")
            start_time = time.time()

            self.model = AutoModelForCausalLM.from_pretrained(
                model_name,
                quantization_config=quantization_config,
                torch_dtype=torch.float16 if self.device == "cuda" else torch.float32,
                device_map="auto" if self.device == "cuda" else None,
                trust_remote_code=True
            )

            # Move to device if not using device_map
            if self.device == "cuda" and quantization_config is None:
                self.model = self.model.to(self.device)

            load_time = time.time() - start_time
            print(f"✅ BioGPT loaded successfully! ({load_time:.1f} seconds)")

            # Test the model
            self.test_biogpt()

        except Exception as e:
            print(f"❌ BioGPT loading failed: {e}")
            print("💡 Falling back to smaller medical model...")
            self.setup_fallback_model()

    def setup_fallback_model(self):
        """Setup fallback model if BioGPT fails"""
        try:
            fallback_model = "microsoft/DialoGPT-medium"
            print(f"🔄 Loading fallback model: {fallback_model}")

            self.tokenizer = AutoTokenizer.from_pretrained(fallback_model)
            self.model = AutoModelForCausalLM.from_pretrained(fallback_model)

            if self.tokenizer.pad_token is None:
                self.tokenizer.pad_token = self.tokenizer.eos_token

            if self.device == "cuda":
                self.model = self.model.to(self.device)

            print("✅ Fallback model loaded")

        except Exception as e:
            print(f"❌ All models failed: {e}")
            self.model = None
            self.tokenizer = None

    def test_biogpt(self):
        """Test BioGPT with a simple medical query"""
        print("🧪 Testing BioGPT...")
        try:
            test_prompt = "Fever in children can be caused by"
            inputs = self.tokenizer(test_prompt, return_tensors="pt")

            if self.device == "cuda":  # Ensure inputs are on the correct device
                inputs = {k: v.to(self.device) for k, v in inputs.items()}

            with torch.no_grad():
                outputs = self.model.generate(
                    **inputs,
                    max_new_tokens=20,
                    do_sample=True,
                    temperature=0.7,
                    pad_token_id=self.tokenizer.eos_token_id
                )

            response = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
            print("✅ BioGPT test successful!")
            print(f"   Test response: {response}")

        except Exception as e:
            print(f"⚠️ BioGPT test failed: {e}")

    def load_medical_data(self, file_path: str):
        """Load and process medical data with progress tracking"""
        print(f"📖 Loading medical data from {file_path}...")

        try:
            with open(file_path, 'r', encoding='utf-8') as f:
                text = f.read()
            print(f"📄 File loaded: {len(text):,} characters")
        except FileNotFoundError:
            print(f"❌ File {file_path} not found!")
            return False

        # Create chunks optimized for medical content
        print("📝 Creating medical-optimized chunks...")
        chunks = self.create_medical_chunks(text)
        print(f"📋 Created {len(chunks)} medical chunks")

        self.knowledge_chunks = chunks

        # Generate embeddings with progress and add them to the FAISS index
        if self.use_embeddings and self.embedding_model and self.faiss_ready:
            return self.generate_embeddings_with_progress(chunks)

        print("✅ Medical data loaded (text search mode)")
        return True

    def create_medical_chunks(self, text: str, chunk_size: int = 400) -> List[Dict]:
        """Create medically-optimized text chunks"""
        chunks = []

        # Split by medical sections first
        medical_sections = self.split_by_medical_sections(text)

        chunk_id = 0
        for section in medical_sections:
            if len(section.split()) > chunk_size:
                # Split large sections by sentences
                sentences = re.split(r'[.!?]+', section)
                current_chunk = ""

                for sentence in sentences:
                    sentence = sentence.strip()
                    if not sentence:
                        continue

                    if len(current_chunk.split()) + len(sentence.split()) < chunk_size:
                        current_chunk += sentence + ". "
                    else:
                        if current_chunk.strip():
                            chunks.append({
                                'id': chunk_id,
                                'text': current_chunk.strip(),
                                'medical_focus': self.identify_medical_focus(current_chunk)
                            })
                            chunk_id += 1
                        current_chunk = sentence + ". "

                if current_chunk.strip():
                    chunks.append({
                        'id': chunk_id,
                        'text': current_chunk.strip(),
                        'medical_focus': self.identify_medical_focus(current_chunk)
                    })
                    chunk_id += 1
            else:
                chunks.append({
                    'id': chunk_id,
                    'text': section,
                    'medical_focus': self.identify_medical_focus(section)
                })
                chunk_id += 1

        return chunks

    def split_by_medical_sections(self, text: str) -> List[str]:
        """Split text by medical sections"""
        # Look for medical section headers
        section_patterns = [
            r'\n\s*(?:SYMPTOMS?|TREATMENT|DIAGNOSIS|CAUSES?|PREVENTION|MANAGEMENT).*?\n',
            r'\n\s*\d+\.\s+',  # Numbered sections
            r'\n\n+'           # Paragraph breaks
        ]

        sections = [text]
        for pattern in section_patterns:
            new_sections = []
            for section in sections:
                splits = re.split(pattern, section, flags=re.IGNORECASE)
                new_sections.extend([s.strip() for s in splits if len(s.strip()) > 100])
            sections = new_sections

        return sections
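
    # Worked example (sketch): for input like
    #   "SYMPTOMS\nFever and cough lasting several days...\n\nTREATMENT\nRest, fluids..."
    # the first pattern splits on the SYMPTOMS/TREATMENT header lines, the
    # second on numbered sections ("1. ", "2. "), and the third on blank
    # lines; fragments of 100 characters or fewer are discarded at each pass.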

    def identify_medical_focus(self, text: str) -> str:
        """Identify the medical focus of a text chunk"""
        text_lower = text.lower()

        # Medical categories
        categories = {
            'pediatric_symptoms': ['fever', 'cough', 'rash', 'vomiting', 'diarrhea'],
            'treatments': ['treatment', 'therapy', 'medication', 'antibiotics'],
            'diagnosis': ['diagnosis', 'diagnostic', 'symptoms', 'signs'],
            'emergency': ['emergency', 'urgent', 'serious', 'hospital'],
            'prevention': ['prevention', 'vaccine', 'immunization', 'avoid']
        }

        for category, keywords in categories.items():
            if any(keyword in text_lower for keyword in keywords):
                return category

        return 'general_medical'

    def generate_embeddings_with_progress(self, chunks: List[Dict]) -> bool:
        """Generate embeddings with progress tracking and add them to the FAISS index"""
        print("🔮 Generating medical embeddings and adding to FAISS index...")

        if not self.embedding_model or not self.faiss_index:
            print("❌ Embedding model or FAISS index not available.")
            return False

        try:
            texts = [chunk['text'] for chunk in chunks]

            # Generate embeddings in batches with progress
            batch_size = 32
            all_embeddings = []

            for i in range(0, len(texts), batch_size):
                batch_texts = texts[i:i + batch_size]
                batch_embeddings = self.embedding_model.encode(batch_texts, show_progress_bar=False)
                all_embeddings.extend(batch_embeddings)

                # Show progress
                progress = min(i + batch_size, len(texts))
                print(f"   Progress: {progress}/{len(texts)} chunks processed", end='\r')

            print(f"\n   ✅ Generated embeddings for {len(texts)} chunks")

            # Add embeddings to the FAISS index
            print("💾 Adding embeddings to FAISS index...")
            self.faiss_index.add(np.array(all_embeddings))

            print("✅ Medical embeddings added to FAISS index successfully!")
            return True

        except Exception as e:
            print(f"❌ Embedding generation or FAISS add failed: {e}")
            return False

    def retrieve_medical_context(self, query: str, n_results: int = 3) -> List[str]:
        """Retrieve relevant medical context using embeddings or keyword search"""
        if self.use_embeddings and self.embedding_model and self.faiss_ready:
            try:
                # Generate query embedding
                query_embedding = self.embedding_model.encode([query])

                # Search for similar content in the FAISS index
                distances, indices = self.faiss_index.search(np.array(query_embedding), n_results)

                # Retrieve the corresponding chunks (-1 marks empty result slots)
                context_chunks = [self.knowledge_chunks[i]['text'] for i in indices[0] if i != -1]

                if context_chunks:
                    return context_chunks

            except Exception as e:
                print(f"⚠️ Embedding search failed: {e}")

        # Fallback to keyword search
        print("⚠️ Falling back to keyword search.")
        return self.keyword_search_medical(query, n_results)

    def keyword_search_medical(self, query: str, n_results: int) -> List[str]:
        """Medical-focused keyword search"""
        if not self.knowledge_chunks:
            return []

        query_words = set(query.lower().split())
        chunk_scores = []

        for chunk_info in self.knowledge_chunks:
            chunk_text = chunk_info['text']
            chunk_words = set(chunk_text.lower().split())

            # Calculate relevance score
            word_overlap = len(query_words.intersection(chunk_words))
            base_score = word_overlap / len(query_words) if query_words else 0

            # Boost medical content
            medical_boost = 0
            if chunk_info.get('medical_focus') in ['pediatric_symptoms', 'treatments', 'diagnosis']:
                medical_boost = 0.5

            final_score = base_score + medical_boost

            if final_score > 0:
                chunk_scores.append((final_score, chunk_text))

        # Return top matches
        chunk_scores.sort(reverse=True)
        return [chunk for _, chunk in chunk_scores[:n_results]]
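
    # Scoring example (sketch): if the query has 4 distinct words and 2 of
    # them appear in a chunk, base_score = 2/4 = 0.5; a chunk tagged
    # 'pediatric_symptoms', 'treatments', or 'diagnosis' gains the 0.5
    # boost, giving a final score of 1.0.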

    def generate_biogpt_response(self, context: str, query: str) -> str:
        """Generate medical response using BioGPT"""
        if not self.model or not self.tokenizer:
            return "Medical model not available. Please check the setup."

        try:
            # Create medical-focused prompt
            prompt = f"""Medical Context: {context[:800]}

Question: {query}

Medical Answer:"""

            # Tokenize input
            inputs = self.tokenizer(
                prompt,
                return_tensors="pt",
                truncation=True,
                max_length=1024
            )

            # Move inputs to the correct device
            if self.device == "cuda":
                inputs = {k: v.to(self.device) for k, v in inputs.items()}

            # Generate response
            with torch.no_grad():
                outputs = self.model.generate(
                    **inputs,
                    max_new_tokens=150,
                    do_sample=True,
                    temperature=0.7,
                    top_p=0.9,
                    pad_token_id=self.tokenizer.eos_token_id,
                    repetition_penalty=1.1
                )

            # Decode response
            full_response = self.tokenizer.decode(outputs[0], skip_special_tokens=True)

            # Extract just the generated part
            if "Medical Answer:" in full_response:
                generated_response = full_response.split("Medical Answer:")[-1].strip()
            else:
                generated_response = full_response[len(prompt):].strip()

            # Clean up response
            return self.clean_medical_response(generated_response)

        except Exception as e:
            print(f"⚠️ BioGPT generation failed: {e}")
            return self.fallback_response(context, query)

    def clean_medical_response(self, response: str) -> str:
        """Clean and format medical response"""
        # Remove incomplete sentences and limit length
        sentences = re.split(r'[.!?]+', response)
        clean_sentences = []

        for sentence in sentences:
            sentence = sentence.strip()
            if len(sentence) > 10 and not sentence.endswith(('and', 'or', 'but', 'however')):
                clean_sentences.append(sentence)
                if len(clean_sentences) >= 3:  # Limit to 3 sentences
                    break

        if clean_sentences:
            cleaned = '. '.join(clean_sentences) + '.'
        else:
            cleaned = response[:200] + '...' if len(response) > 200 else response

        return cleaned

    def fallback_response(self, context: str, query: str) -> str:
        """Fallback response when BioGPT fails"""
        # Extract key sentences from the context
        sentences = [s.strip() for s in context.split('.') if len(s.strip()) > 20]

        if sentences:
            response = sentences[0] + '.'
            if len(sentences) > 1:
                response += ' ' + sentences[1] + '.'
        else:
            response = context[:300] + '...'

        return response

    def handle_conversational_interactions(self, query: str) -> Optional[str]:
        """Handle comprehensive conversational interactions"""
        query_lower = query.lower().strip()

        # ===== GREETINGS =====
        # Anchored patterns so greetings don't swallow medical questions
        greeting_patterns = [
            r'^\s*(hello|hi|hey|hiya|howdy)\s*$',
            r'^\s*(good morning|good afternoon|good evening|good day)\s*$',
            r'^\s*(what\'s up|whats up|sup|yo)\s*$',
            r'^\s*(greetings|salutations)\s*$',
            r'^\s*(how are you|how are you doing|how\'s it going|hows it going)\s*$',
            r'^\s*(good to meet you|nice to meet you|pleased to meet you)\s*$'
        ]

        for pattern in greeting_patterns:
            if re.match(pattern, query_lower):
                responses = [
                    "👋 Hello! I'm BioGPT, your professional medical AI assistant specialized in pediatric medicine. I'm here to provide evidence-based medical information. What health concern can I help you with today?",
                    "🏥 Hi there! I'm a medical AI assistant powered by BioGPT, trained on medical literature. I can help answer questions about children's health and medical conditions. How can I assist you?",
                    "👋 Greetings! I'm your AI medical consultant, ready to help with pediatric health questions using the latest medical knowledge. What would you like to know about?"
                ]
                return np.random.choice(responses)

        # ===== THANKS & APPRECIATION =====
        thanks_patterns = [
            ['thank you', 'thanks', 'thx', 'ty', 'thank you so much', 'thanks a lot', 'much appreciated', 'really appreciate it', 'i appreciate it', 'grateful', 'that was helpful', 'very helpful', 'awesome', 'perfect', 'great', 'excellent', 'wonderful', 'that helped', 'exactly what i needed', 'very informative', 'good information']
        ]

        for pattern_group in thanks_patterns:
            if any(keyword in query_lower for keyword in pattern_group):
                responses = [
                    "🙏 You're very welcome! I'm glad I could provide helpful medical information. Remember, this is educational guidance - always consult your healthcare provider for personalized medical advice. Feel free to ask more questions!",
                    "😊 Happy to help! Providing accurate medical information is what I'm here for. If you have any other pediatric health questions, don't hesitate to ask.",
                    "🤗 You're most welcome! I'm pleased the medical information was useful. Please remember to consult with healthcare professionals for any medical decisions. What else can I help you with?"
                ]
                return np.random.choice(responses)

        # ===== GOODBYES =====
        goodbye_patterns = [
            ['bye', 'goodbye', 'farewell', 'see you', 'later', 'see ya', 'catch you later', 'talk to you later', 'ttyl', 'have a good day', 'have a great day', 'take care', 'until next time', 'i need to go', 'that\'s all for now', 'no more questions']
        ]

        for pattern_group in goodbye_patterns:
            if any(keyword in query_lower for keyword in pattern_group):
                responses = [
                    "👋 Goodbye! Take excellent care of yourself and your little ones. Remember, I'm here whenever you need reliable pediatric medical information. Stay healthy! 🏥",
                    "🌟 Farewell! Wishing you and your family good health. Don't hesitate to return if you have more medical questions. Take care!",
                    "👋 See you later! Hope the medical information was helpful. Remember to always consult healthcare professionals for medical decisions. Stay well!"
                ]
                return np.random.choice(responses)

        # ===== ABOUT/HELP QUESTIONS =====
        about_patterns = [
            ['what are you', 'who are you', 'tell me about yourself', 'what do you do', 'what can you help with', 'what can you do', 'how can you help', 'what are your capabilities', 'help', 'help me', 'i need help', 'can you help', 'how do i use this', 'how does this work', 'what should i ask']
        ]

        for pattern_group in about_patterns:
            if any(keyword in query_lower for keyword in pattern_group):
                return """🤖 **About BioGPT Medical Assistant**

I'm an AI medical assistant powered by BioGPT-Large, a specialized medical AI model trained on extensive medical literature. Here's what I can help you with:

🩺 **Medical Specialties:**
• Pediatric medicine and children's health
• Symptom explanation and medical conditions
• Treatment options and medical procedures
• When to seek medical care
• Prevention and wellness guidance

🎯 **How to Use Me:**
• Ask specific medical questions: "What causes fever in children?"
• Describe symptoms: "My child has a persistent cough"
• Seek guidance: "When should I call the doctor?"
• Get information: "How do I treat dehydration?"

⚠️ **Important Reminder:**
I provide educational medical information based on medical literature, but I'm not a substitute for professional medical advice. Always consult qualified healthcare providers for:
• Medical emergencies
• Diagnosis and treatment decisions
• Personalized medical advice
• Medication guidance

💡 **Tip:** Be specific in your questions for the most helpful responses!

What pediatric health topic would you like to explore?"""

        # ===== SMALL TALK & PERSONAL QUESTIONS =====
        personal_patterns = [
            ['how are you feeling', 'are you okay', 'how\'s your day', 'are you smart', 'are you intelligent', 'do you know everything', 'are you human', 'are you real', 'are you a robot', 'are you ai', 'you\'re smart', 'you\'re helpful', 'good job', 'well done', 'impressive']
        ]

        for pattern_group in personal_patterns:
            if any(keyword in query_lower for keyword in pattern_group):
                responses = [
                    "🤖 I'm an AI medical assistant, so I don't have feelings, but I'm functioning well and ready to help with medical questions! My purpose is to provide reliable pediatric health information. What can I help you with?",
                    "😊 Thank you for asking! As an AI, I'm always ready to assist with medical information. I'm designed to help with pediatric health questions using evidence-based medical knowledge. How can I help you today?",
                    "🎯 I'm doing what I do best - providing medical information! I'm an AI trained on medical literature to help with pediatric health questions. What medical topic interests you?"
                ]
                return np.random.choice(responses)

        # ===== CONFUSED/UNCLEAR INPUT =====
        confusion_patterns = [
            ['i don\'t know', 'not sure', 'confused', 'unclear', 'help me understand', 'what do you mean', 'i don\'t understand', 'can you explain', 'huh', 'i\'m lost', 'i\'m confused', 'this is confusing']
        ]

        for pattern_group in confusion_patterns:
            if any(keyword in query_lower for keyword in pattern_group):
                return """🤔 **I understand it can be confusing!** Let me help you get started.

💡 **Try asking questions like:**

🩺 **Symptoms:**
• "What causes [symptom] in children?"
• "My child has [symptom], what should I do?"

💊 **Treatments:**
• "How do I treat [condition] in children?"
• "What are treatment options for [condition]?"

🚨 **Urgency:**
• "When should I call the doctor about [symptom]?"
• "Is [symptom] serious in children?"

🛡️ **Prevention:**
• "How can I prevent [condition]?"
• "What are the warning signs of [condition]?"

**What specific aspect of your child's health would you like to understand better?**"""

        # ===== APOLOGIES & POLITENESS =====
        polite_patterns = [
            ['sorry', 'excuse me', 'pardon me', 'my apologies', 'please help', 'could you please', 'would you mind', 'if you don\'t mind', 'sorry to bother you']
        ]

        for pattern_group in polite_patterns:
            if any(keyword in query_lower for keyword in pattern_group):
                return "😊 No need to apologize! I'm here to help with medical questions. Please feel free to ask anything about pediatric health - that's exactly what I'm designed for. What can I help you with?"

        # ===== TESTING & VERIFICATION =====
        test_patterns = [
            ['test', 'testing', 'hello world', 'can you hear me', 'are you working', 'do you work', 'are you there', 'are you online', 'check', 'verify', 'ping']
        ]

        for pattern_group in test_patterns:
            if any(keyword in query_lower for keyword in pattern_group):
                return "✅ **System Check:** I'm working perfectly and ready to assist! BioGPT medical AI is online and functioning optimally. Ready to help with pediatric medical questions. What would you like to know?"

        # Return None if no conversational pattern matches
        return None

    def chat(self, query: str) -> str:
        """Main chat function with BioGPT and comprehensive conversational handling"""
        if not query.strip():
            return "Hello! I'm BioGPT, your professional medical AI assistant. How can I help you with pediatric medical questions today?"

        # Handle conversational interactions first
        conversational_response = self.handle_conversational_interactions(query)
        if conversational_response:
            # Add to conversation history
            self.conversation_history.append({
                'query': query,
                'response': conversational_response,
                'timestamp': datetime.now().isoformat(),
                'type': 'conversational'
            })
            return conversational_response

        if not self.knowledge_chunks:
            return "Please load medical data first to access the medical knowledge base."

        print(f"🔍 Processing medical query: {query}")

        # Retrieve relevant medical context using FAISS or keyword search
        start_time = time.time()
        context = self.retrieve_medical_context(query)
        retrieval_time = time.time() - start_time

        if not context:
            return "I don't have specific information about this topic in my medical database. Please consult with a healthcare professional for personalized medical advice."

        print(f"   📚 Context retrieved ({retrieval_time:.2f}s)")

        # Generate response with BioGPT
        start_time = time.time()
        main_context = '\n\n'.join(context)
        response = self.generate_biogpt_response(main_context, query)
        generation_time = time.time() - start_time

        print(f"   🧠 Response generated ({generation_time:.2f}s)")

        # Format final response
        final_response = f"🩺 **Medical Information:** {response}\n\n⚠️ **Important:** This information is for educational purposes only. Always consult with qualified healthcare professionals for medical diagnosis, treatment, and personalized advice."

        # Add to conversation history
        self.conversation_history.append({
            'query': query,
            'response': final_response,
            'timestamp': datetime.now().isoformat(),
            'retrieval_time': retrieval_time,
            'generation_time': generation_time,
            'type': 'medical'
        })

        return final_response

    def get_conversation_summary(self) -> Dict:
        """Get conversation statistics"""
        if not self.conversation_history:
            return {"message": "No conversations yet"}

        # Filter medical conversations for performance stats
        medical_conversations = [h for h in self.conversation_history if h.get('type') == 'medical']

        if not medical_conversations:
            return {
                "total_conversations": len(self.conversation_history),
                "medical_conversations": 0,
                "conversational_interactions": len(self.conversation_history),
                "model_info": "BioGPT-Large" if "BioGPT" in str(self.model) else "Fallback Model",
                "vector_search": "FAISS CPU" if self.faiss_ready else "Keyword Search",
                "device": self.device
            }

        avg_retrieval_time = sum(h.get('retrieval_time', 0) for h in medical_conversations) / len(medical_conversations)
        avg_generation_time = sum(h.get('generation_time', 0) for h in medical_conversations) / len(medical_conversations)

        return {
            "total_conversations": len(self.conversation_history),
            "medical_conversations": len(medical_conversations),
            "conversational_interactions": len(self.conversation_history) - len(medical_conversations),
            "avg_retrieval_time": f"{avg_retrieval_time:.2f}s",
            "avg_generation_time": f"{avg_generation_time:.2f}s",
            "model_info": "BioGPT-Large" if "BioGPT" in str(self.model) else "Fallback Model",
            "vector_search": "FAISS CPU" if self.faiss_ready else "Keyword Search",
            "device": self.device,
            "quantization": "8-bit" if self.use_8bit else "16-bit/32-bit"
        }


# Create and Test BioGPT Chatbot

def create_biogpt_chatbot():
    """Create and initialize the BioGPT chatbot"""
    print("🚀 Creating Professional BioGPT Medical Chatbot")
    print("=" * 60)

    # Create chatbot
    chatbot = ColabBioGPTChatbot(use_gpu=True, use_8bit=True)

    return chatbot


def test_biogpt_chatbot(chatbot, test_file='Pediatric_cleaned.txt'):
    """Test the BioGPT chatbot"""
    print("\n📚 Loading medical data...")
    success = chatbot.load_medical_data(test_file)

    if not success:
        print("❌ Failed to load medical data. Please check the file.")
        return None

    print("\n🧪 Testing BioGPT Medical Chatbot:")
    print("=" * 50)

    # Test queries
    test_queries = [
        "What causes fever in children?",
        "How should I treat my child's cough?",
        "When should I be concerned about my baby's breathing?",
        "What are the signs of dehydration in infants?"
    ]

    for i, query in enumerate(test_queries, 1):
        print(f"\n{i}️⃣ Testing: {query}")
        print("-" * 40)

        response = chatbot.chat(query)
        print(f"🤖 BioGPT Response:\n{response}")
        print("=" * 50)

    # Show conversation summary
    summary = chatbot.get_conversation_summary()
    print("\n📊 Performance Summary:")
    for key, value in summary.items():
        print(f"   {key}: {value}")

    return chatbot


# Interactive Chat Interface

def interactive_biogpt_chat(chatbot):
    """Interactive chat with BioGPT"""
    print("\n💬 Interactive BioGPT Medical Chat")
    print("=" * 50)
    print("You're now chatting with BioGPT, a professional medical AI!")
    print("Type 'quit' to exit, 'summary' to see stats")
    print("-" * 50)

    while True:
        user_input = input("\n👤 You: ").strip()

        if user_input.lower() in ['quit', 'exit', 'bye']:
            print("\n👋 Thank you for using BioGPT Medical Assistant!")
            # Show final summary
            summary = chatbot.get_conversation_summary()
            print("\n📊 Final Session Summary:")
            for key, value in summary.items():
                print(f"   {key}: {value}")
            break

        elif user_input.lower() == 'summary':
            summary = chatbot.get_conversation_summary()
            print("\n📊 Current Session Summary:")
            for key, value in summary.items():
                print(f"   {key}: {value}")
            continue

        elif not user_input:
            continue

        print("\n🤖 BioGPT: ", end="")
        response = chatbot.chat(user_input)
        print(response)


# Main Execution (Colab workflow)
# Guarded so that importing this module (e.g., from app.py on Hugging Face
# Spaces) does not trigger the upload/test/interactive-chat loop.

if __name__ == "__main__":
    # Create the BioGPT chatbot
    chatbot = create_biogpt_chatbot()

    print("\n" + "=" * 60)
    print("🎯 NEXT STEPS:")
    print("1. Upload your medical data file by running: upload_medical_data()")
    print("2. Test the chatbot: test_biogpt_chatbot(chatbot)")
    print("3. Start interactive chat: interactive_biogpt_chat(chatbot)")
    print("=" * 60)

    medical_file = upload_medical_data()    # Upload your file
    chatbot = test_biogpt_chatbot(chatbot)  # Test the chatbot
    if chatbot:
        interactive_biogpt_chat(chatbot)    # Start interactive chat
requirements.txt ADDED
@@ -0,0 +1,12 @@
gradio>=4.0.0
torch>=1.12.0
transformers>=4.21.0
sentence-transformers>=2.2.0
accelerate>=0.20.0
bitsandbytes>=0.39.0
numpy>=1.21.0
sacremoses>=0.0.43
protobuf>=3.20.0
tokenizers>=0.13.0
huggingface-hub>=0.16.0
faiss-cpu>=1.7.0