Nipun Claude committed on
Commit bb0db22 · 1 Parent(s): 219bd2a

Redesign UI for cleaner academic interface and remove pandasai dependency


Major UI improvements:
- Set DeepSeek-R1 as default model
- Remove all decorative emojis for academic look
- Redesign header with logo, subtitle, and model selector
- Clean sidebar with dataset info and quick queries
- Improve message styling with better alignment and spacing
- Add collapsible code containers with clean styling

Technical improvements:
- Remove unused pandasai dependency and functions
- Clean up imports and unused code
- Add CLAUDE.md documentation for future development

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>

Files changed (4)
  1. CLAUDE.md +87 -0
  2. app.py +259 -192
  3. requirements.txt +0 -1
  4. src.py +13 -162
CLAUDE.md ADDED
@@ -0,0 +1,87 @@
+# CLAUDE.md
+
+This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+## Project Overview
+
+VayuChat is a Streamlit-based conversational AI application for air quality data analysis. It provides an interactive chat interface where users can ask questions about PM2.5 and PM10 pollution data through natural language, and receive responses including visualizations and data insights.
+
+## Architecture
+
+The application follows a two-file architecture:
+
+- **app.py**: Main Streamlit application with UI components, chat interface, and user interaction handling
+- **src.py**: Core data processing logic, LLM integration, and code generation/execution engine
+
+Key architectural patterns:
+- **Code Generation Pipeline**: User questions are converted to executable Python code via LLM prompting, then executed dynamically
+- **Multi-LLM Support**: Supports both Groq (LLaMA models) and Google Gemini models through LangChain
+- **Session Management**: Uses Streamlit session state for chat history and user interactions
+- **Feedback Loop**: Comprehensive logging and feedback collection to HuggingFace datasets
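
For concreteness, the Groq/Gemini backend switch named above can be sketched as follows. This is illustrative, not the verbatim `src.py` code; `models` is assumed to be the short-name-to-model-ID mapping, and `Groq_Token`/`gemini_token` mirror variable names that appear in the diffs below.

```python
# Minimal sketch of the multi-LLM selection pattern (illustrative only).
from langchain_groq import ChatGroq
from langchain_google_genai import ChatGoogleGenerativeAI

def make_llm(name: str, models: dict, Groq_Token: str, gemini_token: str):
    """Return a LangChain chat model for the selected backend."""
    if name == "gemini-pro":
        # Google Gemini path
        return ChatGoogleGenerativeAI(
            model=models[name], google_api_key=gemini_token, temperature=0.1
        )
    # Default path: Groq-hosted models (LLaMA, DeepSeek-R1, GPT-OSS, ...)
    return ChatGroq(model=models[name], api_key=Groq_Token, temperature=0.1)
```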
+
+## Development Commands
+
+### Run the Application
+```bash
+streamlit run app.py
+```
+
+### Install Dependencies
+```bash
+pip install -r requirements.txt
+```
+
+### Environment Setup
+Create a `.env` file with the following variables:
+```bash
+GROQ_API_KEY=your_groq_api_key_here
+GEMINI_TOKEN=your_google_gemini_api_key_here
+HF_TOKEN=your_huggingface_token_here  # Optional, for logging
+```
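
These variables are read at startup via `python-dotenv` (imported in `src.py`); a sketch of the lookup, using the token variable names that appear throughout the diffs:

```python
# Sketch of how the .env values reach the app (variable names match the code).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory
Groq_Token = os.getenv("GROQ_API_KEY")
gemini_token = os.getenv("GEMINI_TOKEN")
hf_token = os.getenv("HF_TOKEN")  # optional; logging is skipped when absent
```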
+
+## Data Requirements
+
+- **Data.csv**: Must contain columns: `Timestamp`, `station`, `PM2.5`, `PM10`, `address`, `city`, `latitude`, `longitude`, `state`
+- **IITGN_Logo.png**: Logo image for the sidebar
+- **questions.txt**: Pre-defined quick prompt questions (optional)
+- **system_prompt.txt**: Contains specific instructions for the LLM code generation
+
+## Code Generation System
+
+The application uses a unique code generation approach in `src.py`:
+
+1. **Template-based Code Generation**: User questions are embedded into a Python code template that includes data loading and analysis patterns
+2. **Dynamic Execution**: Generated code is executed in a controlled environment with pandas, matplotlib, and other libraries available
+3. **Result Handling**: Results are stored in an `answer` variable and can be either text/numbers or plot file paths
+4. **Error Recovery**: Comprehensive error handling with logging to HuggingFace datasets
+
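A minimal sketch of steps 1-3, assuming `llm` is any LangChain chat model (the real template and guardrails live in `system_prompt.txt` and `src.py`):

```python
# Illustrative generate-then-execute loop; not the verbatim implementation.
import pandas as pd

def run_generated_code(llm, question: str):
    template = (
        "import pandas as pd\n"
        "df = pd.read_csv('Data.csv')\n"
        f"# Question: {question}\n"
        "# Store the final result (text, number, or plot path) in `answer`\n"
    )
    completion = llm.invoke(template).content  # LLM completes the template (step 1)
    namespace = {"pd": pd}
    exec(template + completion, namespace)     # dynamic execution (step 2)
    return namespace.get("answer")             # text/number or plot file path (step 3)
```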
+## Key Functions (src.py)
+
+- `ask_question()`: Main entry point for processing user queries
+- `preprocess_and_load_df()`: Data loading and preprocessing
+- `load_agent()` / `load_smart_df()`: LLM agent initialization
+- `log_interaction()`: Interaction logging to HuggingFace
+- `upload_feedback()`: User feedback collection (in app.py)
+
+## Model Configuration
+
+Available models are defined in both files:
+- Groq models: LLaMA 3.1, LLaMA 3.3, LLaMA 4 variants, DeepSeek-R1, GPT-OSS
+- Google models: Gemini 1.5 Pro
+
+## Plotting Guidelines
+
+When generating visualization code, the system follows specific guidelines from `system_prompt.txt`:
+- Include India (60 µg/m³) and WHO (15 µg/m³) guidelines for PM2.5
+- Include India (100 µg/m³) and WHO (50 µg/m³) guidelines for PM10
+- Use tight layout and 45-degree rotated x-axis labels
+- Save plots with unique filenames using UUID
+- Use 'Reds' colormap for air quality visualizations
+- Round floating point numbers to 2 decimal places
+- Always report units (µg/m³) and include standard deviation/error for aggregations
+
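Put together, a plot that follows these rules looks roughly like this (a sketch, assuming `monthly_means` is a pandas Series of monthly PM2.5 averages):

```python
# Illustrative plotting snippet following the guidelines above.
import uuid
import matplotlib.pyplot as plt

def plot_pm25(monthly_means):
    fig, ax = plt.subplots()
    monthly_means.round(2).plot(ax=ax, color="darkred")  # values rounded to 2 dp
    # (bar/heatmap variants would use the 'Reds' colormap per the guidelines)
    ax.axhline(60, linestyle="--", color="red", label="India standard (60 µg/m³)")
    ax.axhline(15, linestyle="--", color="green", label="WHO guideline (15 µg/m³)")
    ax.set_ylabel("PM2.5 (µg/m³)")
    ax.tick_params(axis="x", rotation=45)   # 45-degree x-axis labels
    ax.legend()
    plt.tight_layout()
    path = f"plot_{uuid.uuid4().hex}.png"   # unique filename via UUID
    plt.savefig(path)
    plt.close(fig)
    return path                             # returned as the `answer` plot path
```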
+## Logging and Feedback
+
+- All interactions are logged to `SustainabilityLabIITGN/VayuChat_logs` HuggingFace dataset
+- User feedback is collected and stored in `SustainabilityLabIITGN/VayuChat_Feedback` dataset
+- Session tracking via UUID for analytics
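
The upload itself can be done with `huggingface_hub`; a hedged sketch follows. The record fields and file layout here are assumptions; only the dataset repo ID comes from this document.

```python
# Sketch of one log upload to the HuggingFace dataset; layout is assumed.
import json
import os
import tempfile
import uuid
from huggingface_hub import HfApi

def log_to_hf(record: dict, hf_token: str):
    api = HfApi(token=hf_token)
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(record, f)
        local_path = f.name
    api.upload_file(
        path_or_fileobj=local_path,
        path_in_repo=f"logs/{uuid.uuid4().hex}.json",  # assumed file layout
        repo_id="SustainabilityLabIITGN/VayuChat_logs",
        repo_type="dataset",
    )
    os.remove(local_path)  # the app removes its local copies after upload too
```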
app.py CHANGED
@@ -7,12 +7,7 @@ from os.path import join
 from datetime import datetime
 from src import (
     preprocess_and_load_df,
-    load_agent,
-    ask_agent,
-    decorate_with_code,
-    show_response,
     get_from_user,
-    load_smart_df,
     ask_question,
 )
 from dotenv import load_dotenv
@@ -28,7 +23,7 @@ import uuid
 # Page config with beautiful theme
 st.set_page_config(
     page_title="VayuChat - AI Air Quality Assistant",
-    page_icon="🌬️",
+    page_icon="V",
     layout="wide",
     initial_sidebar_state="expanded"
 )
@@ -109,42 +104,41 @@ st.markdown("""
 
 /* User message styling */
 .user-message {
-    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+    background: #3b82f6;
     color: white;
-    padding: 15px 20px;
-    border-radius: 20px 20px 5px 20px;
-    margin: 10px 0;
+    padding: 1rem 1.5rem;
+    border-radius: 12px;
+    margin: 2rem 0;
     margin-left: auto;
     margin-right: 0;
-    max-width: 80%;
-    position: relative;
-    box-shadow: 0 2px 10px rgba(0,0,0,0.1);
+    max-width: 70%;
+    display: flex;
+    justify-content: flex-end;
 }
 
 .user-info {
-    font-size: 0.8rem;
-    opacity: 0.8;
+    font-size: 0.875rem;
+    opacity: 0.9;
     margin-bottom: 5px;
-    text-align: right;
 }
 
 /* Assistant message styling */
 .assistant-message {
-    background: linear-gradient(135deg, #f093fb 0%, #f5576c 100%);
-    color: white;
-    padding: 15px 20px;
-    border-radius: 20px 20px 20px 5px;
-    margin: 10px 0;
+    background: #f1f5f9;
+    color: #334155;
+    padding: 1rem 1.5rem;
+    border-radius: 12px;
+    margin: 2rem 0;
     margin-left: 0;
     margin-right: auto;
-    max-width: 80%;
-    position: relative;
-    box-shadow: 0 2px 10px rgba(0,0,0,0.1);
+    max-width: 70%;
+    display: flex;
+    justify-content: flex-start;
 }
 
 .assistant-info {
-    font-size: 0.8rem;
-    opacity: 0.8;
+    font-size: 0.875rem;
+    color: #6b7280;
     margin-bottom: 5px;
 }
@@ -217,13 +211,80 @@ st.markdown("""
     background-color: #0b5ed7;
 }
 
-/* Code details styling */
-.code-details {
-    background-color: #f8f9fa;
-    border: 1px solid #dee2e6;
+/* Code container styling */
+.code-container {
+    margin: 1rem 0;
+    border: 1px solid #e2e8f0;
     border-radius: 8px;
-    padding: 10px;
-    margin-top: 10px;
+    background: #f8fafc;
+}
+
+.code-header {
+    display: flex;
+    justify-content: space-between;
+    align-items: center;
+    padding: 0.75rem 1rem;
+    background: #f1f5f9;
+    border-bottom: 1px solid #e2e8f0;
+    cursor: pointer;
+    transition: background-color 0.2s;
+}
+
+.code-header:hover {
+    background: #e2e8f0;
+}
+
+.code-title {
+    font-size: 0.875rem;
+    font-weight: 500;
+    color: #374151;
+}
+
+.toggle-text {
+    font-size: 0.75rem;
+    color: #6b7280;
+}
+
+.code-block {
+    background: #1e293b;
+    color: #e2e8f0;
+    padding: 1rem;
+    font-family: 'Monaco', 'Menlo', monospace;
+    font-size: 0.875rem;
+    overflow-x: auto;
+    line-height: 1.5;
+}
+
+.answer-container {
+    background: #f8fafc;
+    border: 1px solid #e2e8f0;
+    border-radius: 8px;
+    padding: 1.5rem;
+    margin: 1rem 0;
+}
+
+.answer-text {
+    font-size: 1.125rem;
+    color: #1e293b;
+    line-height: 1.6;
+    margin-bottom: 1rem;
+}
+
+.answer-highlight {
+    background: #fef3c7;
+    padding: 0.125rem 0.375rem;
+    border-radius: 4px;
+    font-weight: 600;
+    color: #92400e;
+}
+
+.context-info {
+    background: #f1f5f9;
+    border-left: 4px solid #3b82f6;
+    padding: 0.75rem 1rem;
+    margin: 1rem 0;
+    font-size: 0.875rem;
+    color: #475569;
+}
 }
 
 /* Hide default menu and footer */
@@ -239,7 +300,7 @@ header {visibility: hidden;}
 </style>
 """, unsafe_allow_html=True)
 
-# Auto-scroll JavaScript
+# JavaScript for interactions
 st.markdown("""
 <script>
 function scrollToBottom() {
@@ -251,6 +312,19 @@ function scrollToBottom() {
         window.scrollTo(0, document.body.scrollHeight);
     }, 100);
 }
+
+function toggleCode(header) {
+    const codeBlock = header.nextElementSibling;
+    const toggleText = header.querySelector('.toggle-text');
+
+    if (codeBlock.style.display === 'none') {
+        codeBlock.style.display = 'block';
+        toggleText.textContent = 'Click to collapse';
+    } else {
+        codeBlock.style.display = 'none';
+        toggleText.textContent = 'Click to expand';
+    }
+}
 </script>
 """, unsafe_allow_html=True)
@@ -283,7 +357,7 @@ def upload_feedback(feedback, error, output, last_prompt, code, status):
     """Enhanced feedback upload function with better logging and error handling"""
     try:
         if not hf_token or hf_token.strip() == "":
-            st.warning("⚠️ Cannot upload feedback - HF_TOKEN not available")
+            st.warning("Cannot upload feedback - HF_TOKEN not available")
             return False
 
         # Create comprehensive feedback data
@@ -368,187 +442,160 @@ def upload_feedback(feedback, error, output, last_prompt, code, status):
         if os.path.exists(markdown_local_path):
             os.remove(markdown_local_path)
 
-        st.success("🎉 Feedback uploaded successfully!")
+        st.success("Feedback uploaded successfully!")
         return True
 
     except Exception as e:
-        st.error(f"Error uploading feedback: {e}")
+        st.error(f"Error uploading feedback: {e}")
         print(f"Feedback upload error: {e}")
         return False
 
-# Beautiful header
-st.markdown("<h1 class='main-title'>🌬️ VayuChat</h1>", unsafe_allow_html=True)
-
-st.markdown("""
-<div class='subtitle'>
-    <strong>AI-Powered Air Quality Insights</strong><br>
-    Simplifying pollution analysis using conversational AI.
-</div>
-""", unsafe_allow_html=True)
-
-st.markdown("""
-<div class='instructions'>
-    <strong>How to Use:</strong><br>
-    Select a model from the sidebar and ask questions directly in the chat. Use quick prompts below for common queries.
-</div>
-""", unsafe_allow_html=True)
-
-os.environ["PANDASAI_API_KEY"] = "$2a$10$gbmqKotzJOnqa7iYOun8eO50TxMD/6Zw1pLI2JEoqncwsNx4XeBS2"
+# Filter available models
+available_models = []
+model_names = list(models.keys())
+groq_models = []
+gemini_models = []
+for model_name in model_names:
+    if "gemini" not in model_name:
+        groq_models.append(model_name)
+    else:
+        gemini_models.append(model_name)
+if Groq_Token and Groq_Token.strip():
+    available_models.extend(groq_models)
+if gemini_token and gemini_token.strip():
+    available_models.extend(gemini_models)
+
+if not available_models:
+    st.error("No API keys available! Please set up your API keys in the .env file")
+    st.stop()
+
+# Set DeepSeek-R1 as default if available
+default_index = 0
+if "deepseek-R1" in available_models:
+    default_index = available_models.index("deepseek-R1")
+
+# Header with logo, title and model selector
+header_col1, header_col2 = st.columns([2, 1])
+
+with header_col1:
+    st.markdown("""
+    <div style='display: flex; align-items: center; gap: 0.75rem; margin-bottom: 1rem;'>
+        <div style='width: 32px; height: 32px; background: #3b82f6; border-radius: 8px; display: flex; align-items: center; justify-content: center; color: white; font-weight: bold;'>V</div>
+        <div>
+            <h1 style='margin: 0; font-size: 1.25rem; font-weight: 600; color: #1e293b;'>VayuChat</h1>
+            <p style='margin: 0; font-size: 0.875rem; color: #64748b;'>Environmental Data Analysis</p>
+        </div>
+    </div>
+    """, unsafe_allow_html=True)
+
+with header_col2:
+    model_name = st.selectbox(
+        "Model:",
+        available_models,
+        index=default_index,
+        help="Choose your AI model",
+        label_visibility="collapsed"
+    )
+
+st.markdown("<hr style='margin: 1rem 0; border: none; border-top: 1px solid #e2e8f0;'>", unsafe_allow_html=True)
 
 # Load data with error handling
 try:
     df = preprocess_and_load_df(join(self_path, "Data.csv"))
-    st.success("Data loaded successfully!")
+    st.success("Data loaded successfully!")
 except Exception as e:
-    st.error(f"Error loading data: {e}")
+    st.error(f"Error loading data: {e}")
     st.stop()
 
 inference_server = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2"
 image_path = "IITGN_Logo.png"
 
-# Beautiful sidebar
+# Clean sidebar
 with st.sidebar:
-    # Logo and title
-    col1, col2, col3 = st.columns([1, 2, 1])
-    with col2:
-        if os.path.exists(image_path):
-            st.image(image_path, use_column_width=True)
-
-    # Session info
-    st.markdown(f"**Session ID**: `{st.session_state.session_id[:8]}...`")
-
-    # Model selection
-    st.markdown("### 🤖 AI Model Selection")
-
-    # Filter available models
-    available_models = []
-    model_names = list(models.keys())
-    groq_models = []
-    gemini_models = []
-    for model_name in model_names:
-        if "gemini" not in model_name:
-            groq_models.append(model_name)
-        else:
-            gemini_models.append(model_name)
-    if Groq_Token and Groq_Token.strip():
-        available_models.extend(groq_models)
-    if gemini_token and gemini_token.strip():
-        available_models.extend(gemini_models)
-
-    if not available_models:
-        st.error(" No API keys available! Please set up your API keys in the .env file")
-        st.stop()
-
-    model_name = st.selectbox(
-        "Choose your AI assistant:",
-        available_models,
-        help="Different models have different strengths. Try them all!"
-    )
-
-    # Model descriptions
+    # Dataset Info Section
+    st.markdown("### Dataset Info")
+    st.markdown("""
+    <div style='background-color: #f1f5f9; padding: 1rem; border-radius: 8px; margin-bottom: 1.5rem;'>
+        <h4 style='margin: 0 0 0.5rem 0; color: #1e293b; font-size: 1rem;'>PM2.5 Air Quality Data</h4>
+        <p style='margin: 0.25rem 0; font-size: 0.875rem;'><strong>Time Range:</strong> Daily measurements</p>
+        <p style='margin: 0.25rem 0; font-size: 0.875rem;'><strong>Locations:</strong> Multiple cities in Gujarat</p>
+        <p style='margin: 0.25rem 0; font-size: 0.875rem;'><strong>Records:</strong> Air quality monitoring data</p>
+        <p style='margin: 0.25rem 0; font-size: 0.875rem;'><strong>Parameters:</strong> PM2.5, PM10, Location data</p>
+    </div>
+    """, unsafe_allow_html=True)
+
+    # Current Model Info
+    st.markdown("### Current Model")
+    st.markdown(f"**{model_name}**")
+
     model_descriptions = {
-        "llama3.1": "🦙 Fast and efficient for general queries",
-        "llama3.3": "🦙 Most advanced LLaMA model for complex reasoning",
-        "mistral": "Balanced performance and speed",
-        "gemma": "💎 Google's lightweight model",
-        "gemini-pro": "🧠 Google's most powerful model",
-        "gpt-oss-20b": "📘 OpenAI's compact open-weight GPT for everyday tasks",
-        "gpt-oss-120b": "📚 OpenAI's massive open-weight GPT for nuanced responses",
-        "deepseek-R1": "🔍 DeepSeek's distilled LLaMA model for efficient reasoning",
-        "llama4 maverik": "🚀 Meta's LLaMA 4 Maverick — high-performance instruction model",
-        "llama4 scout": "🛰️ Meta's LLaMA 4 Scout — optimized for adaptive reasoning"
+        "llama3.1": "Fast and efficient for general queries",
+        "llama3.3": "Most advanced LLaMA model for complex reasoning",
+        "mistral": "Balanced performance and speed",
+        "gemma": "Google's lightweight model",
+        "gemini-pro": "Google's most powerful model",
+        "gpt-oss-20b": "OpenAI's compact open-weight GPT for everyday tasks",
+        "gpt-oss-120b": "OpenAI's massive open-weight GPT for nuanced responses",
+        "deepseek-R1": "DeepSeek's distilled LLaMA model for efficient reasoning",
+        "llama4 maverik": "Meta's LLaMA 4 Maverick — high-performance instruction model",
+        "llama4 scout": "Meta's LLaMA 4 Scout — optimized for adaptive reasoning"
     }
 
     if model_name in model_descriptions:
-        st.info(model_descriptions[model_name])
+        st.caption(model_descriptions[model_name])
 
     st.markdown("---")
 
-    # Logging status
-    st.markdown("### 📊 Logging Status")
-    if hf_token and hf_token.strip():
-        st.success("✅ Logging enabled")
-        st.caption("Interactions are being logged to HuggingFace")
-    else:
-        st.warning("⚠️ Logging disabled")
-        st.caption("HF_TOKEN not available")
+    # Quick Queries Section
+    st.markdown("### Quick Queries")
+
+    # Load quick prompts
+    questions = []
+    questions_file = join(self_path, "questions.txt")
+    if os.path.exists(questions_file):
+        try:
+            with open(questions_file, 'r', encoding='utf-8') as f:
+                content = f.read()
+            questions = [q.strip() for q in content.split("\n") if q.strip()]
+        except Exception as e:
+            questions = []
+
+    # Add default prompts if file doesn't exist or is empty
+    if not questions:
+        questions = [
+            "Which month had highest pollution?",
+            "Which city has worst air quality?",
+            "Show annual PM2.5 average",
+            "Compare winter vs summer pollution",
+            "List all cities by pollution level",
+            "Plot monthly average PM2.5 for 2023"
+        ]
+
+    # Quick query buttons in sidebar
+    selected_prompt = None
+    for i, question in enumerate(questions[:6]):  # Show only first 6
+        if st.button(
+            question,
+            key=f"sidebar_prompt_{i}",
+            help=question,
+            use_container_width=True
+        ):
+            selected_prompt = question
 
     st.markdown("---")
 
     # Clear Chat Button
-    if st.button("🧹 Clear Chat"):
+    if st.button("Clear Chat", use_container_width=True):
        st.session_state.responses = []
        st.session_state.processing = False
-        # Generate new session ID for new chat
        st.session_state.session_id = str(uuid.uuid4())
        try:
            st.rerun()
        except AttributeError:
            st.experimental_rerun()
-
-    st.markdown("---")
-
-    # Chat History in Sidebar
-    with st.expander("📜 Chat History"):
-        for i, response in enumerate(st.session_state.get("responses", [])):
-            if response.get("role") == "user":
-                st.markdown(f"**You:** {response.get('content', '')[:50]}...")
-            elif response.get("role") == "assistant":
-                content = response.get('content', '')
-                if isinstance(content, str) and len(content) > 50:
-                    st.markdown(f"**VayuChat:** {content[:50]}...")
-                else:
-                    st.markdown(f"**VayuChat:** {str(content)[:50]}...")
-    st.markdown("---")
-
-# Load quick prompts
-questions = []
-questions_file = join(self_path, "questions.txt")
-if os.path.exists(questions_file):
-    try:
-        with open(questions_file, 'r', encoding='utf-8') as f:
-            content = f.read()
-        questions = [q.strip() for q in content.split("\n") if q.strip()]
-        print(f"Loaded {len(questions)} quick prompts")  # Debug
-    except Exception as e:
-        st.error(f"Error loading questions: {e}")
-        questions = []
-
-# Add some default prompts if file doesn't exist or is empty
-if not questions:
-    questions = [
-        "What is the average PM2.5 level in the dataset?",
-        "Show me the air quality trend over time",
-        "Which pollutant has the highest concentration?",
-        "Create a correlation plot between different pollutants",
-        "What are the peak pollution hours?",
-        "Compare weekday vs weekend pollution levels"
-    ]
-
-# Quick prompts section (horizontal)
-st.markdown("### 💭 Quick Prompts")
-
-# Create columns for horizontal layout
-cols_per_row = 2  # Reduced to 2 for better fit
-rows = [questions[i:i + cols_per_row] for i in range(0, len(questions), cols_per_row)]
-
-selected_prompt = None
-for row_idx, row in enumerate(rows):
-    cols = st.columns(len(row))
-    for col_idx, question in enumerate(row):
-        with cols[col_idx]:
-            # Create unique key using row and column indices
-            unique_key = f"prompt_btn_{row_idx}_{col_idx}"
-            button_text = f"📝 {question[:35]}{'...' if len(question) > 35 else ''}"
-
-            if st.button(button_text,
-                         key=unique_key,
-                         help=question,
-                         use_container_width=True):
-                selected_prompt = question
 
-st.markdown("---")
+# Main content area - removed quick prompts section from here as it's now in sidebar
 
 # Initialize chat history and processing state
 if "responses" not in st.session_state:
@@ -557,35 +604,55 @@ if "processing" not in st.session_state:
     st.session_state.processing = False
 
 def show_custom_response(response):
-    """Custom response display function"""
+    """Custom response display function with improved styling"""
     role = response.get("role", "assistant")
     content = response.get("content", "")
 
     if role == "user":
+        # User message with right alignment
         st.markdown(f"""
-        <div class='user-message'>
-            <div class='user-info'>You</div>
-            {content}
+        <div style='display: flex; justify-content: flex-end; margin: 2rem 0;'>
+            <div class='user-message'>
+                {content}
+            </div>
         </div>
        """, unsafe_allow_html=True)
    elif role == "assistant":
+        # Assistant message with left alignment
        st.markdown(f"""
-        <div class='assistant-message'>
-            <div class='assistant-info'>🤖 VayuChat</div>
-            {content if isinstance(content, str) else str(content)}
+        <div style='display: flex; justify-content: flex-start; margin: 2rem 0;'>
+            <div class='assistant-message'>
+                <div class='assistant-info'>VayuChat</div>
+                {content if isinstance(content, str) else str(content)}
+            </div>
        </div>
        """, unsafe_allow_html=True)
 
-        # Show generated code if available
+        # Show generated code with collapsible container
        if response.get("gen_code"):
-            with st.expander("📋 View Generated Code"):
-                st.code(response["gen_code"], language="python")
+            st.markdown("""
+            <div class='code-container'>
+                <div class='code-header' onclick='toggleCode(this)'>
+                    <div class='code-title'>Generated Python Code</div>
+                    <div class='toggle-text'>Click to expand</div>
+                </div>
+                <div class='code-block' style='display: none;'>
+            """, unsafe_allow_html=True)
+
+            st.code(response["gen_code"], language="python")
+
+            st.markdown("</div></div>", unsafe_allow_html=True)
 
        # Try to display image if content is a file path
        try:
            if isinstance(content, str) and (content.endswith('.png') or content.endswith('.jpg')):
                if os.path.exists(content):
+                    # Chart container styling
+                    st.markdown("""
+                    <div style='background: white; border: 1px solid #e2e8f0; border-radius: 8px; padding: 1.5rem; margin: 1rem 0;'>
+                    """, unsafe_allow_html=True)
                    st.image(content)
+                    st.markdown("</div>", unsafe_allow_html=True)
                    return {"is_image": True}
        except:
            pass
@@ -596,9 +663,9 @@ def show_processing_indicator(model_name, question):
     """Show processing indicator"""
     st.markdown(f"""
     <div class='processing-indicator'>
-        <div class='assistant-info'>🤖 VayuChat • Processing with {model_name}</div>
+        <div class='assistant-info'>VayuChat • Processing with {model_name}</div>
         <strong>Question:</strong> {question}<br>
-        <em>🔄 Generating response...</em>
+        <em>Generating response...</em>
     </div>
     """, unsafe_allow_html=True)
@@ -622,7 +689,7 @@ with chat_container:
             feedback_data = st.session_state.responses[response_id]["feedback"]
             st.markdown(f"""
             <div class='feedback-section'>
-                <strong>📝 Your Feedback:</strong> {feedback_data.get('score', '')}
+                <strong>Your Feedback:</strong> {feedback_data.get('score', '')}
                 {f"- {feedback_data.get('text', '')}" if feedback_data.get('text') else ""}
             </div>
             """, unsafe_allow_html=True)
@@ -640,13 +707,13 @@ with chat_container:
             if thumbs_up or thumbs_down:
                 thumbs = "👍 Helpful" if thumbs_up else "👎 Not Helpful"
                 comments = st.text_area(
-                    "💬 Tell us more (optional):",
+                    "Tell us more (optional):",
                     key=f"{feedback_key}_comments",
                     placeholder="What could be improved? Any suggestions?",
                     max_chars=500
                 )
 
-                if st.button("🚀 Submit Feedback", key=f"{feedback_key}_submit"):
+                if st.button("Submit Feedback", key=f"{feedback_key}_submit"):
                     feedback = {"score": thumbs, "text": comments}
 
                     # Upload feedback with enhanced error handling
@@ -665,7 +732,7 @@ with chat_container:
 )
 
 # Chat input (always visible at bottom)
-prompt = st.chat_input("💬 Ask me anything about air quality!", key="main_chat")
+prompt = st.chat_input("Ask me anything about air quality!", key="main_chat")
 
 # Handle selected prompt from quick prompts
 if selected_prompt:
@@ -704,7 +771,7 @@ if st.session_state.get("processing"):
     if not isinstance(response, dict):
         response = {
             "role": "assistant",
-            "content": "Error: Invalid response format",
+            "content": "Error: Invalid response format",
             "gen_code": "",
             "ex_code": "",
             "last_prompt": prompt,
@@ -767,7 +834,7 @@ if st.session_state.responses:
 # Footer
 st.markdown("""
 <div style='text-align: center; margin-top: 3rem; padding: 2rem; background: rgba(255,255,255,0.1); border-radius: 15px;'>
-    <h3>🌍 Together for Cleaner Air</h3>
+    <h3>Together for Cleaner Air</h3>
     <p>VayuChat - Empowering environmental awareness through AI</p>
     <small>© 2024 IIT Gandhinagar Sustainability Lab</small>
 </div>
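
For orientation, the contract between the two files after this change: `src.ask_question()` returns a dict that `show_custom_response()` renders. A simplified sketch of that flow (illustrative, not the verbatim app code):

```python
# Simplified consumer of the response dict visible in the diffs above.
import os
import streamlit as st
from src import ask_question

response = ask_question(model_name="deepseek-R1", question="Show annual PM2.5 average")
# Keys: role, content, gen_code, ex_code, last_prompt, error
if response.get("error"):
    st.error(response["content"])  # error text is prepared inside src.py
else:
    content = response["content"]
    if isinstance(content, str) and content.endswith((".png", ".jpg")) and os.path.exists(content):
        st.image(content)          # generated code saved a plot and returned its path
    else:
        st.write(content)          # plain text or numeric answer
    if response.get("gen_code"):
        st.code(response["gen_code"], language="python")  # shown collapsible in the UI
```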
requirements.txt CHANGED
@@ -3,7 +3,6 @@ streamlit==1.32.2
 pandas==1.5.3
 langchain==0.1.15
 python-dotenv==1.0.0
-pandasai==2.0.30
 geopandas
 plotly
 streamlit_feedback
src.py CHANGED
@@ -1,9 +1,7 @@
 import os
 import pandas as pd
-from pandasai import Agent, SmartDataframe
 from typing import Tuple
 from PIL import Image
-from pandasai.llm import HuggingFaceTextGen
 from dotenv import load_dotenv
 from langchain_groq import ChatGroq
 from langchain_google_genai import ChatGoogleGenerativeAI
@@ -97,161 +95,14 @@ def preprocess_and_load_df(path: str) -> pd.DataFrame:
     except Exception as e:
         raise Exception(f"Error loading dataframe: {e}")
 
-def load_agent(df: pd.DataFrame, context: str, inference_server: str, name="mistral") -> Agent:
-    """Load pandas AI agent with error handling"""
-    try:
-        if name == "gemini-pro":
-            if not gemini_token or gemini_token.strip() == "":
-                raise ValueError("Gemini API token not available or empty")
-            llm = ChatGoogleGenerativeAI(
-                model=models[name],
-                google_api_key=gemini_token,
-                temperature=0.1
-            )
-        else:
-            if not Groq_Token or Groq_Token.strip() == "":
-                raise ValueError("Groq API token not available or empty")
-            llm = ChatGroq(
-                model=models[name],
-                api_key=Groq_Token,
-                temperature=0.1
-            )
-
-        agent = Agent(df, config={"llm": llm, "enable_cache": False, "options": {"wait_for_model": True}})
-        if context:
-            agent.add_message(context)
-        return agent
-    except Exception as e:
-        raise Exception(f"Error loading agent: {e}")
 
-def load_smart_df(df: pd.DataFrame, inference_server: str, name="mistral") -> SmartDataframe:
-    """Load smart dataframe with error handling"""
-    try:
-        if name == "gemini-pro":
-            if not gemini_token or gemini_token.strip() == "":
-                raise ValueError("Gemini API token not available or empty")
-            llm = ChatGoogleGenerativeAI(
-                model=models[name],
-                google_api_key=gemini_token,
-                temperature=0.1
-            )
-        else:
-            if not Groq_Token or Groq_Token.strip() == "":
-                raise ValueError("Groq API token not available or empty")
-            llm = ChatGroq(
-                model=models[name],
-                api_key=Groq_Token,
-                temperature=0.1
-            )
-
-        df = SmartDataframe(df, config={"llm": llm, "max_retries": 5, "enable_cache": False})
-        return df
-    except Exception as e:
-        raise Exception(f"Error loading smart dataframe: {e}")
 
 def get_from_user(prompt):
     """Format user prompt"""
     return {"role": "user", "content": prompt}
 
-def ask_agent(agent: Agent, prompt: str) -> dict:
-    """Ask agent with comprehensive error handling"""
-    start_time = datetime.now()
-    try:
-        response = agent.chat(prompt)
-        execution_time = (datetime.now() - start_time).total_seconds()
-
-        gen_code = getattr(agent, 'last_code_generated', '')
-        ex_code = getattr(agent, 'last_code_executed', '')
-        last_prompt = getattr(agent, 'last_prompt', prompt)
-
-        # Log the interaction
-        log_interaction(
-            user_query=prompt,
-            model_name="pandas_ai_agent",
-            response_content=response,
-            generated_code=gen_code,
-            execution_time=execution_time,
-            error_message=None,
-            is_image=isinstance(response, str) and any(response.endswith(ext) for ext in ['.png', '.jpg', '.jpeg'])
-        )
-
-        return {
-            "role": "assistant",
-            "content": response,
-            "gen_code": gen_code,
-            "ex_code": ex_code,
-            "last_prompt": last_prompt,
-            "error": None
-        }
-    except Exception as e:
-        execution_time = (datetime.now() - start_time).total_seconds()
-        error_msg = str(e)
-
-        # Log the failed interaction
-        log_interaction(
-            user_query=prompt,
-            model_name="pandas_ai_agent",
-            response_content=f"Error: {error_msg}",
-            generated_code="",
-            execution_time=execution_time,
-            error_message=error_msg,
-            is_image=False
-        )
-
-        return {
-            "role": "assistant",
-            "content": f"Error: {error_msg}",
-            "gen_code": "",
-            "ex_code": "",
-            "last_prompt": prompt,
-            "error": error_msg
-        }
-
-def decorate_with_code(response: dict) -> str:
-    """Decorate response with code details"""
-    gen_code = response.get("gen_code", "No code generated")
-    last_prompt = response.get("last_prompt", "No prompt")
-
-    return f"""<details>
-    <summary>Generated Code</summary>
-
-    ```python
-    {gen_code}
-    ```
-    </details>
-
-    <details>
-    <summary>Prompt</summary>
-
-    {last_prompt}
-    """
-
-def show_response(st, response):
-    """Display response with error handling"""
-    try:
-        with st.chat_message(response["role"]):
-            content = response.get("content", "No content")
-
-            try:
-                # Try to open as image
-                image = Image.open(content)
-                if response.get("gen_code"):
-                    st.markdown(decorate_with_code(response), unsafe_allow_html=True)
-                st.image(image)
-                return {"is_image": True}
-            except:
-                # Not an image, display as text
-                if response.get("gen_code"):
-                    display_content = decorate_with_code(response) + f"""</details>
-
-                    {content}"""
-                else:
-                    display_content = content
-                st.markdown(display_content, unsafe_allow_html=True)
-                return {"is_image": False}
-    except Exception as e:
-        st.error(f"Error displaying response: {e}")
-        return {"is_image": False}
 
 def ask_question(model_name, question):
     """Ask question with comprehensive error handling and logging"""
@@ -274,7 +125,7 @@ def ask_question(model_name, question):
         log_interaction(
             user_query=question,
             model_name=model_name,
-            response_content="Gemini API token not available or empty",
+            response_content="Gemini API token not available or empty",
             generated_code="",
             execution_time=execution_time,
             error_message=error_msg,
@@ -283,7 +134,7 @@ def ask_question(model_name, question):
 
         return {
             "role": "assistant",
-            "content": "Gemini API token not available or empty. Please set GEMINI_TOKEN in your environment variables.",
+            "content": "Gemini API token not available or empty. Please set GEMINI_TOKEN in your environment variables.",
             "gen_code": "",
             "ex_code": "",
             "last_prompt": question,
@@ -303,7 +154,7 @@ def ask_question(model_name, question):
         log_interaction(
             user_query=question,
             model_name=model_name,
-            response_content="Groq API token not available or empty",
+            response_content="Groq API token not available or empty",
             generated_code="",
             execution_time=execution_time,
             error_message=error_msg,
@@ -312,7 +163,7 @@ def ask_question(model_name, question):
 
         return {
             "role": "assistant",
-            "content": "Groq API token not available or empty. Please set GROQ_API_KEY in your environment variables and restart the application.",
+            "content": "Groq API token not available or empty. Please set GROQ_API_KEY in your environment variables and restart the application.",
             "gen_code": "",
             "ex_code": "",
             "last_prompt": question,
@@ -334,10 +185,10 @@ def ask_question(model_name, question):
         error_msg = str(api_error)
 
         if "organization_restricted" in error_msg.lower() or "unauthorized" in error_msg.lower():
-            response_content = "API Key Error: Your Groq API key appears to be invalid, expired, or restricted. Please check your API key in the .env file."
+            response_content = "API Key Error: Your Groq API key appears to be invalid, expired, or restricted. Please check your API key in the .env file."
             log_error_msg = f"API key validation failed: {error_msg}"
         else:
-            response_content = f"API Connection Error: {error_msg}"
+            response_content = f"API Connection Error: {error_msg}"
             log_error_msg = error_msg
 
         # Log the failed interaction
@@ -369,7 +220,7 @@ def ask_question(model_name, question):
         log_interaction(
             user_query=question,
             model_name=model_name,
-            response_content="Data.csv file not found",
+            response_content="Data.csv file not found",
             generated_code="",
             execution_time=execution_time,
             error_message=error_msg,
@@ -378,7 +229,7 @@ def ask_question(model_name, question):
 
         return {
             "role": "assistant",
-            "content": "Data.csv file not found. Please ensure the data file is in the correct location.",
+            "content": "Data.csv file not found. Please ensure the data file is in the correct location.",
             "gen_code": "",
             "ex_code": "",
             "last_prompt": question,
@@ -499,7 +350,7 @@ Complete the following code to answer the user's question:
         log_interaction(
             user_query=question,
             model_name=model_name,
-            response_content=f"Error executing generated code: {error_msg}",
+            response_content=f"Error executing generated code: {error_msg}",
             generated_code=full_code if 'full_code' in locals() else "",
             execution_time=execution_time,
             error_message=error_msg,
@@ -508,7 +359,7 @@ Complete the following code to answer the user's question:
 
         return {
             "role": "assistant",
-            "content": f"Error executing generated code: {error_msg}",
+            "content": f"Error executing generated code: {error_msg}",
             "gen_code": full_code if 'full_code' in locals() else "",
             "ex_code": full_code if 'full_code' in locals() else "",
             "last_prompt": question,
@@ -521,13 +372,13 @@ Complete the following code to answer the user's question:
 
         # Handle specific API errors
         if "organization_restricted" in error_msg:
-            response_content = "API Organization Restricted: Your API key access has been restricted. Please check your Groq API key or try generating a new one."
+            response_content = "API Organization Restricted: Your API key access has been restricted. Please check your Groq API key or try generating a new one."
             log_error_msg = "API access restricted"
         elif "rate_limit" in error_msg.lower():
-            response_content = "Rate limit exceeded. Please wait a moment and try again."
+            response_content = "Rate limit exceeded. Please wait a moment and try again."
             log_error_msg = "Rate limit exceeded"
         else:
-            response_content = f"Error: {error_msg}"
+            response_content = f"Error: {error_msg}"
             log_error_msg = error_msg
 
         # Log the failed interaction