import streamlit as st
import numpy as np
import cv2
from PIL import Image
from io import BytesIO
from ultralytics import YOLO
import os
import tempfile
import base64
import requests
from datetime import datetime
from gtts import gTTS
from googletrans import Translator
import google.generativeai as genai  # Import Gemini API

# Configuring Google Gemini API
GEMINI_API_KEY = os.getenv("GOOGLE_API_KEY")
genai.configure(api_key=GEMINI_API_KEY)

# Loading YOLO model for crop disease detection
yolo_model = YOLO("models/best.pt")

# Initializing conversation history if not set
if "conversation_history" not in st.session_state:
    st.session_state.conversation_history = {}


# Function to preprocess images
def preprocess_image(image, target_size=(224, 224)):
    """Resize image for AI models."""
    image = Image.fromarray(image)
    image = image.resize(target_size)
    return image


# Generate response from Gemini AI with history
def generate_gemini_response(disease_list, user_context="", conversation_history=None):
    """Generate a structured diagnosis using the Gemini API, considering conversation history."""
    try:
        model = genai.GenerativeModel("gemini-1.5-pro")

        # Start with detected diseases
        prompt = f"""
        You are an expert plant pathologist. The detected crop diseases are: {', '.join(disease_list)}.
        User's context or question: {user_context if user_context else "Provide a general analysis"}
        """

        # Add past conversation history for better continuity
        if conversation_history:
            history_text = "\n\nPrevious conversation:\n"
            for entry in conversation_history:
                history_text += f"- User: {entry['question']}\n- AI: {entry['response']}\n"
            prompt += history_text

        # Ask Gemini for a structured diagnosis
        prompt += """
        Provide a detailed diagnosis including:
        1. Symptoms
        2. Causes and risk factors
        3. Impact on crops
        4. Treatment options (short-term & long-term)
        5. Prevention strategies
        """

        response = model.generate_content(prompt)
        return response.text if response else "No response from Gemini."
    except Exception as e:
        return f"Error connecting to Gemini API: {str(e)}"


# Performing inference using YOLO
def inference(image, conf=0.4):
    """Detect crop diseases in the given image at the given confidence threshold."""
    results = yolo_model(image, conf=conf)
    infer = np.zeros(image.shape, dtype=np.uint8)
    detected_classes = []
    class_names = {}
    for r in results:
        infer = r.plot()  # annotated image (BGR) with boxes and labels
        class_names = r.names
        detected_classes = r.boxes.cls.tolist()
    return infer, detected_classes, class_names


# Converting text to chosen language speech
def text_to_speech(text, language="en"):
    """Convert text to speech using gTTS."""
    try:
        with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as temp_audio:
            tts = gTTS(text=text, lang=language, slow=False)
            tts.save(temp_audio.name)
        with open(temp_audio.name, "rb") as audio_file:
            audio_bytes = audio_file.read()
        os.unlink(temp_audio.name)
        return audio_bytes
    except Exception as e:
        st.error(f"Error generating speech: {str(e)}")
        return None


# Initialize Streamlit UI
st.title("AI-Powered Crop Disease Detection & Diagnosis System")

# Sidebar settings
with st.sidebar:
    st.header("Settings")

    # Fake model selection (still uses Gemini)
    selected_model = st.selectbox(
        "Choose Model",
        ["Gemini", "GPT-4", "Claude", "Llama 3", "Mistral"],
        help="This app always uses Gemini.",
    )
    confidence_threshold = st.slider("Detection Confidence Threshold", 0.0, 1.0, 0.4)

    # Text-to-Speech Settings
    tts_enabled = st.checkbox("Enable Text-to-Speech", value=True)
    language = st.selectbox(
        "Speech Language",
        options=["en", "ne", "hi", "bn"],
        format_func=lambda x: {"en": "English", "ne": "Nepali", "hi": "Hindi", "bn": "Bengali"}[x],
    )

    if st.button("Clear Conversation History"):
        st.session_state.conversation_history = {}
        st.success("Conversation history cleared!")

# User context input with example prompts
st.subheader("Provide Initial Context or Ask a Question")

# Generalized example prompts for easier input
example_prompts = {
    "Select an example...": "",
    "General Plant Health Issue": "My plant leaves are wilting and turning yellow. Is this a disease or a nutrient deficiency?",
    "Leaf Spots and Discoloration": "I see dark spots on my crop leaves. Could this be a fungal or bacterial infection?",
    "Leaves Drying or Curling": "The leaves on my plants are curling and drying up. What could be causing this?",
    "Pest or Disease?": "I noticed tiny insects on my plants along with some leaf damage. Could this be a pest problem or a disease?",
    "Overwatering or Root Rot?": "My plant leaves are turning brown and mushy. Is this due to overwatering or a root infection?",
    "Poor Crop Growth": "My crops are growing very slowly and seem weak. Could this be due to soil problems or disease?",
    "Weather and Disease Connection": "It has been raining a lot, and now my plants have mold. Could the weather be causing a fungal disease?",
    "Regional Disease Concern": "I'm in a humid area and my crops often get infected. What are common diseases for this climate?",
}

# Dropdown menu for selecting an example
selected_example = st.selectbox("Choose an example to auto-fill:", list(example_prompts.keys()))

# Auto-fill the text area when an example is selected
user_context = st.text_area(
    "Enter details, symptoms, or a question about your plant condition.",
    value=example_prompts[selected_example] if selected_example != "Select an example..." else "",
    placeholder="Example: My plant leaves are turning yellow and wilting. Is this a disease or a nutrient issue?",
)

# Upload an image
uploaded_file = st.file_uploader("📤 Upload a plant image", type=["jpg", "jpeg", "png"])

if uploaded_file:
    file_id = uploaded_file.name

    # Initialize conversation history for this image if not set
    if file_id not in st.session_state.conversation_history:
        st.session_state.conversation_history[file_id] = []

    # Convert uploaded file to an OpenCV image (BGR)
    file_bytes = np.asarray(bytearray(uploaded_file.read()), dtype=np.uint8)
    img = cv2.imdecode(file_bytes, 1)

    # Perform inference using the sidebar confidence threshold
    processed_image, detected_classes, class_names = inference(img, conf=confidence_threshold)

    # Display processed image with detected diseases (the annotated image is BGR)
    st.image(processed_image, caption="🔍 Detected Diseases", channels="BGR", use_column_width=True)

    if detected_classes:
        # Class IDs come back as floats, so cast to int before looking up names
        detected_disease_names = [class_names[int(cls)] for cls in detected_classes]
        st.write(f"✅ **Detected Diseases:** {', '.join(detected_disease_names)}")

        # AI-generated diagnosis from Gemini
        st.subheader("📋 AI Diagnosis")
        with st.spinner("Generating diagnosis... 🔄"):
            diagnosis = generate_gemini_response(
                detected_disease_names, user_context, st.session_state.conversation_history[file_id]
            )

        # Save response to history
        st.session_state.conversation_history[file_id].append({"question": user_context, "response": diagnosis})

        # Display the diagnosis
        st.write(diagnosis)

        # Show past conversation history
        if st.session_state.conversation_history[file_id]:
            st.subheader("🗂️ Conversation History")
            for i, entry in enumerate(st.session_state.conversation_history[file_id]):
                with st.expander(f"Q{i+1}: {entry['question'][:50]}..."):
                    st.write("**User:**", entry["question"])
                    st.write("**AI:**", entry["response"])

        # Convert diagnosis to speech if enabled
        if tts_enabled:
            if st.button("🔊 Listen to Diagnosis"):
                with st.spinner("Generating audio... 🎵"):
                    # Translate the diagnosis (already a plain string) to the target language
                    translator = Translator()
                    translated_text = translator.translate(diagnosis, dest=language).text

                    # Filter out Markdown characters like '#' and '*' before TTS
                    filtered_text = "".join(c for c in translated_text if c not in ("#", "*"))

                    # Now process the translated text for TTS
                    audio_bytes = text_to_speech(filtered_text, language)
                    if audio_bytes:
                        st.audio(audio_bytes, format="audio/mp3")
    else:
        st.write("❌ No crop disease detected.")

# Instructions for users
st.markdown("""
---
### How to Use:
1. Upload an image of a plant leaf with suspected disease.
2. Provide context (optional) about symptoms or concerns.
3. The system detects the disease using AI.
4. Gemini generates a diagnosis with symptoms and treatments.
5. Ask follow-up questions, and the AI will remember previous responses.
6. Optionally, listen to the AI-generated diagnosis.
""")
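# Launch note (assumptions: the script is saved as app.py, GOOGLE_API_KEY is set in the
# environment, and the YOLO weights exist at models/best.pt):
#   streamlit run app.py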