37-AN committed
Commit 2a735cc · 1 Parent(s): 28ff371

Update for Hugging Face Space deployment
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- title: Personal AI Assistant with RAG
  emoji: 🤗
  colorFrom: indigo
  colorTo: purple
@@ -9,66 +9,152 @@ pinned: true
  license: mit
  ---

- # Personal AI Assistant with RAG

- A powerful personal AI assistant that uses Retrieval-Augmented Generation (RAG) to provide responses based on your documents and notes.

  ## Features

- - Uses free Hugging Face models for language processing and embeddings
- - Stores and retrieves information in a vector database
- - Upload PDF, TXT, and CSV files to expand the knowledge base
- - Add direct text input to your knowledge base
- - View sources for AI responses
- - Conversation history tracking

- ## How to Use

- 1. **Upload Documents**: Use the sidebar to upload files (PDF, TXT, CSV)
- 2. **Add Text**: Enter text directly into the knowledge base
- 3. **Ask Questions**: Chat with the assistant about your documents
- 4. **View Sources**: See where information is coming from

- ## Deployment

- ### Local Deployment

- To run the app locally:

- 1. Clone this repository
- 2. Install requirements: `pip install -r requirements.txt`
- 3. Run the Streamlit UI: `python run.py --ui`
- 4. Or run the API server: `python run.py --api`

- ### Deploying to Hugging Face Spaces

- This application can be easily deployed to Hugging Face Spaces for free hosting:

- 1. Make sure you have a Hugging Face account
- 2. Create a Hugging Face API token at https://huggingface.co/settings/tokens
- 3. Run the deployment script: `python deploy_to_hf.py`
- 4. Follow the prompts to enter your username, token, and space name
- 5. Wait for the deployment to complete

- If you encounter any issues during deployment, run `python check_git_status.py` to diagnose and fix common problems.

- The deployment process:
- - Creates a Hugging Face Space using the Spaces SDK
- - Configures git for pushing to Hugging Face
- - Pushes your code to the Space
- - Builds and deploys the Docker container automatically

- ## Built With

- - Hugging Face Models
-   - LLM: google/flan-t5-large
-   - Embeddings: sentence-transformers/all-MiniLM-L6-v2
- - LangChain for orchestration
- - Qdrant for vector storage
- - Streamlit for UI

- Created by [p3rc03](https://huggingface.co/p3rc03)

  ## License

- MIT License - See LICENSE file for details
  ---
+ title: 🧠 Personal AI Second Brain
  emoji: 🤗
  colorFrom: indigo
  colorTo: purple
  license: mit
  ---

+ # 🧠 Personal AI Second Brain

+ A personalized AI assistant that serves as your second brain, built with Hugging Face, Streamlit, and Telegram integration. This system helps you store and retrieve information from your documents, conversations, and notes through a powerful Retrieval-Augmented Generation (RAG) system.

  ## Features

+ - **Chat Interface**: Ask questions and get answers based on your personal knowledge base
+ - **Document Management**: Upload and process documents (PDF, TXT, DOC, etc.)
+ - **RAG System**: Retrieve relevant information from your knowledge base
+ - **Telegram Integration**: Access your second brain through Telegram
+ - **Persistent Chat History**: Store conversations in Hugging Face Datasets
+ - **Expandable**: Easy to add new data sources and functionality
+
+ ## Architecture
+
+ The system is built from the following components:
+
+ 1. **LLM Layer**: Hugging Face models for text generation and embeddings
+ 2. **Memory Layer**: Vector database (Qdrant) for storing and retrieving information
+ 3. **RAG System**: Retrieval-Augmented Generation to ground answers in your data
+ 4. **Ingestion Pipeline**: Processes documents and chat history
+ 5. **Telegram Bot**: Telegram integration for chat-based access
+ 6. **Hugging Face Dataset**: Persistent storage for chat history
+
+ ## Setup
+
+ ### Requirements
+
+ - Python 3.8+
+ - Hugging Face account (for model access and hosting)
+ - Telegram account (for bot integration, optional)
+
+ ### Installation
+
+ 1. Clone the repository:
+ ```
+ git clone <repository-url>
+ cd personal-ai-second-brain
+ ```
+
+ 2. Install dependencies:
+ ```
+ pip install -r requirements.txt
+ ```
+
+ 3. Create a `.env` file with your configuration:
+ ```
+ # API Keys
+ HF_API_KEY=your_huggingface_api_key_here
+ TELEGRAM_BOT_TOKEN=your_telegram_bot_token_here
+
+ # LLM Configuration
+ LLM_MODEL=gpt2  # Use a small model for Hugging Face Spaces
+ EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
+
+ # Vector Database
+ VECTOR_DB_PATH=./data/vector_db
+ COLLECTION_NAME=personal_assistant
+
+ # Application Settings
+ DEFAULT_TEMPERATURE=0.7
+ CHUNK_SIZE=512
+ CHUNK_OVERLAP=128
+ MAX_TOKENS=256
+
+ # Telegram Bot Settings
+ TELEGRAM_ENABLED=false
+ TELEGRAM_ALLOWED_USERS=  # Comma-separated list of Telegram user IDs
+
+ # Hugging Face Dataset Settings
+ HF_DATASET_NAME=username/second-brain-history  # Your username/dataset-name
+ CHAT_HISTORY_DIR=./data/chat_history
+ SYNC_INTERVAL=60  # How often to sync history to HF (minutes)
+ ```
+
+ 4. Create the necessary directories:
+ ```
+ mkdir -p data/documents data/vector_db data/chat_history
+ ```
+
+ ### Running Locally
+
+ Start the application:
+ ```
+ streamlit run app/ui/streamlit_app.py
+ ```

+ ### Deploying to Hugging Face Spaces

+ 1. Create a new Space on Hugging Face
+ 2. Upload the code to the Space
+ 3. Set the environment variables in the Space settings
+ 4. The application will start automatically

+ ## Telegram Bot Setup

+ 1. Talk to [@BotFather](https://t.me/botfather) on Telegram
+ 2. Use the `/newbot` command to create a new bot
+ 3. Get your bot token and add it to your `.env` file
+ 4. Set `TELEGRAM_ENABLED=true` in your `.env` file
+ 5. To find your Telegram user ID (for restricting access), talk to [@userinfobot](https://t.me/userinfobot)

+ ### Telegram Commands

+ - **/start**: Start a conversation with the bot
+ - **/help**: Show available commands
+ - **/search**: Use `/search your query` to search your knowledge base
+ - **Direct messages**: Send any message to chat with your second brain

+ ## Hugging Face Dataset Integration
+
+ To enable persistent chat history across deployments:
+
+ 1. Create a private dataset repository on the Hugging Face Hub
+ 2. Set your API token in the `.env` file as `HF_API_KEY`
+ 3. Set your dataset name as `HF_DATASET_NAME` (format: username/repo-name)

+ ## Customization

+ ### Using Different Models

+ You can change the models by updating the `.env` file:

+ ```
+ LLM_MODEL=mistralai/Mistral-7B-Instruct-v0.2
+ EMBEDDING_MODEL=sentence-transformers/all-mpnet-base-v2
+ ```

+ ### Adding Custom Tools

+ To add custom tools to your agent, modify `app/core/agent.py` to include the additional functionality.

+ ## Roadmap
+
+ - [ ] Web search tool integration
+ - [ ] Calendar and email integration
+ - [ ] Voice interface
+ - [ ] Mobile app integration
+ - [ ] Fine-tuning for personalized responses
+
+ ## Contributing
+
+ Contributions are welcome! Please feel free to submit a Pull Request.

  ## License

+ This project is licensed under the MIT License - see the LICENSE file for details.
+
+ Created by [p3rc03](https://huggingface.co/p3rc03)
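The `CHUNK_SIZE`/`CHUNK_OVERLAP` settings in the `.env` example above describe a sliding-window split of ingested text, where consecutive chunks share an overlap so that context is not cut at chunk boundaries. A minimal character-level sketch of that idea follows; the repository itself uses LangChain's text splitters, and `chunk_text` here is a purely illustrative helper, not part of this commit:

```python
def chunk_text(text: str, chunk_size: int = 512, chunk_overlap: int = 128) -> list:
    """Split text into windows of chunk_size characters, where each window
    overlaps the previous one by chunk_overlap characters."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # how far the window advances each time
    return [text[i:i + chunk_size] for i in range(0, len(text), step)
            if text[i:i + chunk_size]]
```

With the default 512/128 values a 1000-character document yields three chunks, and the last 128 characters of each chunk reappear at the start of the next.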
app/config.py CHANGED
@@ -9,6 +9,7 @@ load_dotenv(dotenv_path=env_path)

  # API Keys
  HF_API_KEY = os.getenv('HF_API_KEY', '')
+ TELEGRAM_BOT_TOKEN = os.getenv('TELEGRAM_BOT_TOKEN', '')

  # LLM Configuration
  # Use models that are freely accessible and don't require authentication
@@ -38,12 +39,27 @@ CHUNK_SIZE = int(os.getenv('CHUNK_SIZE', 512))
  CHUNK_OVERLAP = int(os.getenv('CHUNK_OVERLAP', 128))
  MAX_TOKENS = int(os.getenv('MAX_TOKENS', 256))

+ # Telegram Bot Settings
+ TELEGRAM_ENABLED = os.getenv('TELEGRAM_ENABLED', 'false').lower() == 'true'
+ TELEGRAM_ALLOWED_USERS = os.getenv('TELEGRAM_ALLOWED_USERS', '')
+ if TELEGRAM_ALLOWED_USERS:
+     TELEGRAM_ALLOWED_USERS = [int(user_id.strip()) for user_id in TELEGRAM_ALLOWED_USERS.split(',')]
+ else:
+     TELEGRAM_ALLOWED_USERS = []
+
+ # Hugging Face Dataset Settings for Chat History
+ HF_DATASET_NAME = os.getenv('HF_DATASET_NAME', '')  # Format: username/repo-name
+ CHAT_HISTORY_DIR = os.getenv('CHAT_HISTORY_DIR', './data/chat_history')
+ # How often to sync chat history to HF Hub (in minutes)
+ SYNC_INTERVAL = int(os.getenv('SYNC_INTERVAL', 60))
+
  # Create a template .env file if it doesn't exist
  def create_env_example():
      if not os.path.exists('.env.example'):
          with open('.env.example', 'w') as f:
              f.write("""# API Keys
  HF_API_KEY=your_huggingface_api_key_here
+ TELEGRAM_BOT_TOKEN=your_telegram_bot_token_here

  # LLM Configuration
  LLM_MODEL=gpt2 # Use small model for Hugging Face Spaces
@@ -58,4 +74,13 @@ DEFAULT_TEMPERATURE=0.7
  CHUNK_SIZE=512
  CHUNK_OVERLAP=128
  MAX_TOKENS=256
+
+ # Telegram Bot Settings
+ TELEGRAM_ENABLED=false
+ TELEGRAM_ALLOWED_USERS= # Comma-separated list of Telegram user IDs
+
+ # Hugging Face Dataset Settings
+ HF_DATASET_NAME=username/second-brain-history # Your username/dataset-name
+ CHAT_HISTORY_DIR=./data/chat_history
+ SYNC_INTERVAL=60 # How often to sync history to HF (minutes)
  """)
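A note on the `TELEGRAM_ALLOWED_USERS` parsing added above: `int(user_id.strip())` raises `ValueError` if the variable contains a trailing comma or an empty entry (e.g. `123,,456,`), since `int('')` fails. A slightly more defensive variant is sketched below; `parse_allowed_users` is a hypothetical helper for illustration, not part of the commit:

```python
def parse_allowed_users(raw: str) -> list:
    """Parse a comma-separated list of Telegram user IDs, skipping blank entries
    so that trailing commas and stray whitespace do not raise ValueError."""
    return [int(part) for part in (p.strip() for p in raw.split(",")) if part]
```

This keeps the empty-string case (`TELEGRAM_ALLOWED_USERS=` unset) returning an empty list, matching the existing `else` branch.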
app/core/chat_history.py ADDED
@@ -0,0 +1,192 @@
+ import os
+ import logging
+ import uuid
+ import json
+ import pandas as pd
+ from datetime import datetime
+ from typing import List, Dict, Any, Optional
+ from datasets import Dataset, load_dataset
+ from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
+
+ # Configure logging
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ class ChatHistoryManager:
+     """
+     Manages chat history persistence using Hugging Face Datasets.
+     Supports both local storage and syncing to Hugging Face Hub.
+     """
+
+     def __init__(self, dataset_name=None, local_dir="./data/chat_history"):
+         """
+         Initialize the chat history manager.
+
+         Args:
+             dataset_name: Hugging Face dataset name (username/repo)
+             local_dir: Local directory to store chat history
+         """
+         self.dataset_name = dataset_name or os.getenv("HF_DATASET_NAME")
+         self.local_dir = local_dir
+         self.hf_api = HfApi()
+         self.token = os.getenv("HF_API_KEY")
+
+         # Create local directory if it doesn't exist
+         os.makedirs(self.local_dir, exist_ok=True)
+
+         # Local path for the jsonl file
+         self.local_file = os.path.join(self.local_dir, "chat_history.jsonl")
+
+         # Ensure the file exists
+         if not os.path.exists(self.local_file):
+             with open(self.local_file, "w") as f:
+                 f.write("")
+
+         logger.info(f"Chat history manager initialized with local file: {self.local_file}")
+         if self.dataset_name:
+             logger.info(f"Will sync to HF dataset: {self.dataset_name}")
+
+     def load_history(self) -> List[Dict[str, Any]]:
+         """Load chat history from local file or Hugging Face dataset."""
+         try:
+             # First try to load from local file
+             if os.path.exists(self.local_file) and os.path.getsize(self.local_file) > 0:
+                 with open(self.local_file, "r") as f:
+                     lines = f.readlines()
+                 history = [json.loads(line) for line in lines if line.strip()]
+                 logger.info(f"Loaded {len(history)} conversations from local file")
+                 return history
+
+             # If local file is empty or doesn't exist, try to load from HF
+             if self.dataset_name and self.token:
+                 try:
+                     dataset = load_dataset(self.dataset_name, token=self.token)
+                     history = dataset["train"].to_pandas().to_dict("records")
+                     logger.info(f"Loaded {len(history)} conversations from Hugging Face")
+
+                     # Write back to local file
+                     self._write_history_to_local(history)
+                     return history
+                 except Exception as e:
+                     logger.warning(f"Error loading from Hugging Face: {e}")
+
+             # If we get here, return empty history
+             return []
+         except Exception as e:
+             logger.error(f"Error loading chat history: {e}")
+             return []
+
+     def save_conversation(self, conversation: Dict[str, Any]) -> bool:
+         """
+         Save a conversation to history.
+
+         Args:
+             conversation: Dict with keys: user_query, assistant_response,
+                 timestamp, sources (optional)
+
+         Returns:
+             bool: True if successful
+         """
+         try:
+             # Add ID and timestamp if not present
+             if "id" not in conversation:
+                 conversation["id"] = str(uuid.uuid4())
+             if "timestamp" not in conversation:
+                 conversation["timestamp"] = datetime.now().isoformat()
+
+             # Append to local file
+             with open(self.local_file, "a") as f:
+                 f.write(json.dumps(conversation) + "\n")
+
+             logger.info(f"Saved conversation to local file: {conversation['id']}")
+             return True
+         except Exception as e:
+             logger.error(f"Error saving conversation: {e}")
+             return False
+
+     def sync_to_hub(self) -> bool:
+         """Sync local chat history to Hugging Face Hub."""
+         if not self.dataset_name or not self.token:
+             logger.warning("Cannot sync to Hub: missing dataset name or token")
+             return False
+
+         try:
+             # Read the local file
+             history = self.load_history()
+             if not history:
+                 logger.warning("No history to sync")
+                 return False
+
+             # Create a Dataset object
+             ds = Dataset.from_pandas(pd.DataFrame(history))
+
+             # Push to Hub
+             ds.push_to_hub(
+                 self.dataset_name,
+                 token=self.token,
+                 private=True
+             )
+
+             logger.info(f"Successfully synced {len(history)} conversations to Hugging Face Hub")
+             return True
+         except Exception as e:
+             logger.error(f"Error syncing to Hub: {e}")
+             return False
+
+     def _write_history_to_local(self, history: List[Dict[str, Any]]) -> bool:
+         """Write history list to local file."""
+         try:
+             with open(self.local_file, "w") as f:
+                 for conversation in history:
+                     f.write(json.dumps(conversation) + "\n")
+             return True
+         except Exception as e:
+             logger.error(f"Error writing history to local file: {e}")
+             return False
+
+     def get_conversations_by_date(self, start_date=None, end_date=None) -> List[Dict[str, Any]]:
+         """Get conversations filtered by date range."""
+         history = self.load_history()
+
+         if not start_date and not end_date:
+             return history
+
+         filtered = []
+         for conv in history:
+             timestamp = conv.get("timestamp", "")
+             if not timestamp:
+                 continue
+
+             try:
+                 conv_date = datetime.fromisoformat(timestamp)
+
+                 if start_date and end_date:
+                     if start_date <= conv_date <= end_date:
+                         filtered.append(conv)
+                 elif start_date:
+                     if start_date <= conv_date:
+                         filtered.append(conv)
+                 elif end_date:
+                     if conv_date <= end_date:
+                         filtered.append(conv)
+             except ValueError:
+                 continue
+
+         return filtered
+
+     def search_conversations(self, query: str) -> List[Dict[str, Any]]:
+         """Search conversations by keyword (simple text match)."""
+         history = self.load_history()
+         query = query.lower()
+
+         results = []
+         for conv in history:
+             user_query = conv.get("user_query", "").lower()
+             assistant_response = conv.get("assistant_response", "").lower()
+
+             if query in user_query or query in assistant_response:
+                 results.append(conv)
+
+         return results
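The persistence format used by `ChatHistoryManager` above is plain JSONL: one JSON object per line, appended on save and filtered for blank lines on load. The round-trip can be sketched in isolation, without the Hugging Face dependencies; the two functions below mirror `save_conversation` and `load_history` but are standalone illustrations, not part of the commit:

```python
import json
import os
import uuid
from datetime import datetime


def append_conversation(path: str, conversation: dict) -> dict:
    """Append one conversation as a JSON line, filling in the id and
    timestamp defaults the way save_conversation does."""
    conversation.setdefault("id", str(uuid.uuid4()))
    conversation.setdefault("timestamp", datetime.now().isoformat())
    with open(path, "a") as f:
        f.write(json.dumps(conversation) + "\n")
    return conversation


def load_conversations(path: str) -> list:
    """Load all conversations, skipping blank lines (mirrors load_history)."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```

Because each record is a single self-describing line, appends are cheap and a partially written last line can be discarded without corrupting earlier history.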
app/core/discord_bot.py ADDED
@@ -0,0 +1,177 @@
+ import discord
+ import asyncio
+ import logging
+ import os
+ from typing import Dict, List, Any
+ import threading
+ from discord.ext import commands
+
+ # Configure logging
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ class DiscordBot:
+     """
+     Discord bot integration for the AI second brain.
+     Handles message ingestion, responses, and synchronization with the main app.
+     """
+
+     def __init__(self, agent, token=None, channel_whitelist=None):
+         """
+         Initialize the Discord bot.
+
+         Args:
+             agent: The AssistantAgent instance to use for processing queries
+             token: Discord bot token (defaults to environment variable)
+             channel_whitelist: List of channel IDs to listen to (None for all)
+         """
+         self.agent = agent
+         self.token = token or os.getenv("DISCORD_BOT_TOKEN")
+         self.channel_whitelist = channel_whitelist or []
+         self.message_history = []
+
+         # Set up Discord client with intents
+         intents = discord.Intents.default()
+         intents.message_content = True  # Required to read message content
+         self.client = commands.Bot(command_prefix="!", intents=intents)
+
+         # Register event handlers
+         self.setup_event_handlers()
+
+         # Thread for bot
+         self.bot_thread = None
+
+         logger.info("Discord bot initialized")
+
+     def setup_event_handlers(self):
+         """Register event handlers for the Discord client."""
+
+         @self.client.event
+         async def on_ready():
+             logger.info(f"Discord bot logged in as {self.client.user}")
+
+         @self.client.event
+         async def on_message(message):
+             # Don't respond to self
+             if message.author == self.client.user:
+                 return
+
+             # Check if this is a command
+             await self.client.process_commands(message)
+
+             # Only process messages in whitelisted channels if whitelist exists
+             if self.channel_whitelist and message.channel.id not in self.channel_whitelist:
+                 return
+
+             # Only respond to messages that mention the bot or are DMs
+             is_dm = isinstance(message.channel, discord.DMChannel)
+             is_mentioned = self.client.user in message.mentions
+
+             if is_dm or is_mentioned:
+                 await self.process_message(message)
+
+         # Add a !help command
+         @self.client.command(name="help")
+         async def help_command(ctx):
+             help_text = """
+             **AI Assistant Commands**
+             - Mention me with a question to get an answer
+             - Send me a DM with your query
+             - Use `!search <query>` to search your knowledge base
+             - Use `!upload` with an attachment to add to your knowledge base
+             """
+             await ctx.send(help_text)
+
+         # Add a search command
+         @self.client.command(name="search")
+         async def search_command(ctx, *, query):
+             async with ctx.typing():
+                 response = await self.process_query(query)
+                 await ctx.send(response["answer"])
+
+                 # If there are sources, show them in a followup message
+                 if response["sources"]:
+                     sources_text = "**Sources:**\n" + "\n".join([
+                         f"- {s['file_name']} ({s['source']})"
+                         for s in response["sources"]
+                     ])
+                     await ctx.send(sources_text)
+
+     async def process_message(self, message):
+         """Process a Discord message and send a response."""
+         # Clean up mention and extract query
+         content = message.content
+         for mention in message.mentions:
+             content = content.replace(f'<@{mention.id}>', '').replace(f'<@!{mention.id}>', '')
+
+         query = content.strip()
+         if not query:
+             await message.channel.send("How can I help you?")
+             return
+
+         # Show typing indicator
+         async with message.channel.typing():
+             # Process the query and get a response
+             response = await self.process_query(query)
+
+             # Store in message history
+             self.message_history.append({
+                 "user": str(message.author),
+                 "query": query,
+                 "response": response["answer"],
+                 "timestamp": message.created_at.isoformat(),
+                 "channel": str(message.channel)
+             })
+
+             # Send the response
+             await message.channel.send(response["answer"])
+
+             # If there are sources, send them in a followup message
+             if response["sources"]:
+                 sources_text = "**Sources:**\n" + "\n".join([
+                     f"- {s['file_name']} ({s['source']})"
+                     for s in response["sources"]
+                 ])
+                 await message.channel.send(sources_text)
+
+     async def process_query(self, query):
+         """Process a query using the agent and return a response."""
+         # Run the query in a thread to avoid blocking the event loop
+         loop = asyncio.get_event_loop()
+         response = await loop.run_in_executor(None, self.agent.query, query)
+
+         # Add the conversation to the agent's memory
+         if "answer" in response:
+             await loop.run_in_executor(
+                 None,
+                 self.agent.add_conversation_to_memory,
+                 query,
+                 response["answer"]
+             )
+
+         return response
+
+     def start(self):
+         """Start the Discord bot in a separate thread."""
+         if not self.token:
+             logger.error("Discord bot token not found")
+             return False
+
+         def run_bot():
+             asyncio.set_event_loop(asyncio.new_event_loop())
+             self.client.run(self.token)
+
+         self.bot_thread = threading.Thread(target=run_bot, daemon=True)
+         self.bot_thread.start()
+         logger.info("Discord bot started in background thread")
+         return True
+
+     def stop(self):
+         """Stop the Discord bot."""
+         if self.client and self.client.is_ready():
+             asyncio.run_coroutine_threadsafe(self.client.close(), self.client.loop)
+         logger.info("Discord bot stopped")
+
+     def get_message_history(self):
+         """Get the message history."""
+         return self.message_history
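`process_message` above strips the bot's mention tokens (`<@id>` and the nickname form `<@!id>`) from the raw message content before treating the remainder as the query. That cleanup is a pure string transformation and can be tested without a Discord connection; `strip_mentions` below is an illustrative standalone version, not part of the commit:

```python
def strip_mentions(content: str, bot_id: int) -> str:
    """Remove <@id> and <@!id> mention tokens for the given bot user,
    then trim surrounding whitespace, as process_message does."""
    for token in (f"<@{bot_id}>", f"<@!{bot_id}>"):
        content = content.replace(token, "")
    return content.strip()
```

The final `strip()` matters: after removing a leading mention, the query would otherwise begin with the space that separated the mention from the text.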
app/core/ingestion.py CHANGED
@@ -5,7 +5,7 @@ import time
  import random
  import traceback
  from typing import List, Dict, Any
- from langchain.document_loaders import (
+ from langchain_community.document_loaders import (
      PyPDFLoader,
      TextLoader,
      CSVLoader,
app/core/llm.py CHANGED
@@ -1,4 +1,4 @@
- from langchain.llms import HuggingFaceHub
+ from langchain_community.llms import HuggingFaceHub
  from langchain_community.llms import HuggingFaceEndpoint, HuggingFacePipeline
  from langchain_community.embeddings import HuggingFaceEmbeddings
  from langchain.chains import LLMChain
app/core/memory.py CHANGED
@@ -3,7 +3,7 @@ import sys
  import time
  import random
  import logging
- from langchain.vectorstores import Qdrant
+ from langchain_community.vectorstores import Qdrant
  from langchain.chains import ConversationalRetrievalChain
  from langchain.memory import ConversationBufferMemory
  from qdrant_client import QdrantClient
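The one-line changes in `ingestion.py`, `llm.py`, and `memory.py` all migrate imports from the old `langchain.*` locations to `langchain_community.*`, where these integrations now live. Code that must run against both old and new LangChain releases sometimes falls back from one import path to the next; a generic sketch of that pattern follows (`import_first` is illustrative, not part of this commit, and is shown here with stdlib modules):

```python
import importlib


def import_first(*names):
    """Return the first module among names that imports successfully,
    e.g. import_first("langchain_community.vectorstores", "langchain.vectorstores")."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue  # try the next candidate path
    raise ImportError(f"none of {names} could be imported")
```

Pinning `langchain-community` in `requirements.txt`, as this commit implies, is the simpler fix when you control the environment; the fallback pattern is mainly useful for library code.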
app/core/telegram_bot.py ADDED
@@ -0,0 +1,233 @@
+ import os
+ import logging
+ import threading
+ import asyncio
+ from typing import Dict, List, Any
+ import time
+ from datetime import datetime
+ from telegram import Update, Bot
+ from telegram.ext import Application, CommandHandler, MessageHandler, ContextTypes, filters
+
+ # Configure logging
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ class TelegramBot:
+     """
+     Telegram bot integration for the AI second brain.
+     Handles message ingestion, responses, and synchronization with the main app.
+     """
+
+     def __init__(self, agent, token=None, allowed_user_ids=None):
+         """
+         Initialize the Telegram bot.
+
+         Args:
+             agent: The AssistantAgent instance to use for processing queries
+             token: Telegram bot token (defaults to environment variable)
+             allowed_user_ids: List of Telegram user IDs that can use the bot (None for all)
+         """
+         self.agent = agent
+         self.token = token or os.getenv("TELEGRAM_BOT_TOKEN")
+         self.allowed_user_ids = allowed_user_ids or []
+         if isinstance(self.allowed_user_ids, str):
+             # Convert comma-separated string to list of integers
+             self.allowed_user_ids = [int(uid.strip()) for uid in self.allowed_user_ids.split(',') if uid.strip()]
+         self.message_history = []
+
+         # Initialize bot application
+         self.application = None
+         self.bot_thread = None
+
+         logger.info("Telegram bot initialized")
+
+     async def start_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
+         """Handle the /start command."""
+         user_name = update.message.from_user.first_name
+         await update.message.reply_text(
+             f"Hello {user_name}! I'm your AI Second Brain assistant. Ask me anything or use /help to see available commands."
+         )
+
+     async def help_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
+         """Handle the /help command."""
+         help_text = """
+         *AI Second Brain Commands*
+         - Just send me a message with your question
+         - /search query - Search your knowledge base
+         - /help - Show this help message
+         """
+         await update.message.reply_text(help_text, parse_mode='Markdown')
+
+     async def search_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
+         """Handle the /search command."""
+         # Check if user is allowed
+         if self.allowed_user_ids and update.message.from_user.id not in self.allowed_user_ids:
+             await update.message.reply_text("You're not authorized to use this bot.")
+             return
+
+         query = ' '.join(context.args)
+         if not query:
+             await update.message.reply_text("Please provide a search query: /search your query here")
+             return
+
+         # Show typing status
+         await context.bot.send_chat_action(chat_id=update.effective_chat.id, action="typing")
+
+         # Process the query
+         try:
+             response = await self.process_query(query)
+
+             # Send the response
+             await update.message.reply_text(response["answer"])
+
+             # If there are sources, send them in a followup message
+             if response["sources"]:
+                 sources_text = "*Sources:*\n" + "\n".join([
+                     f"- {s['file_name']} ({s['source']})"
+                     for s in response["sources"]
+                 ])
+                 await update.message.reply_text(sources_text, parse_mode='Markdown')
+         except Exception as e:
+             logger.error(f"Error processing search: {e}")
+             await update.message.reply_text(f"Error processing your search: {str(e)}")
+
+     async def handle_message(self, update: Update, context: ContextTypes.DEFAULT_TYPE):
+         """Handle normal messages."""
+         # Check if user is allowed
+         if self.allowed_user_ids and update.message.from_user.id not in self.allowed_user_ids:
+             await update.message.reply_text("You're not authorized to use this bot.")
+             return
+
+         # Get the message text
+         query = update.message.text
+
+         # Show typing status
+         await context.bot.send_chat_action(chat_id=update.effective_chat.id, action="typing")
+
+         # Process the query
+         try:
+             # Process the message
+             response = await self.process_query(query)
+
+             # Store in message history
+             self.message_history.append({
+                 "user": update.message.from_user.username or str(update.message.from_user.id),
+                 "user_id": update.message.from_user.id,
+                 "query": query,
+                 "response": response["answer"],
+                 "timestamp": datetime.now().isoformat(),
+                 "chat_id": update.effective_chat.id
+             })
+
+             # Send the response
+             await update.message.reply_text(response["answer"])
+
+             # If there are sources, send them in a followup message
+             if response["sources"]:
+                 sources_text = "*Sources:*\n" + "\n".join([
+                     f"- {s['file_name']} ({s['source']})"
+                     for s in response["sources"]
+                 ])
+                 await update.message.reply_text(sources_text, parse_mode='Markdown')
+         except Exception as e:
+             logger.error(f"Error processing message: {e}")
+             await update.message.reply_text(f"I encountered an error: {str(e)}")
+
+     async def error_handler(self, update, context):
+         """Handle errors."""
+         logger.error(f"Error: {context.error} - caused by update {update}")
+
+         # Send a message to the user
+         if update and update.effective_chat:
+             await context.bot.send_message(
+                 chat_id=update.effective_chat.id,
+                 text="Sorry, an error occurred while processing your message."
+             )
+
+     async def process_query(self, query):
+         """Process a query using the agent and return a response."""
+         loop = asyncio.get_event_loop()
+
+         # Run the query in a separate thread to avoid blocking
+         def run_query():
+             return self.agent.query(query)
+
+         # Execute the query
+         response = await loop.run_in_executor(None, run_query)
+
+         # Add the conversation to the agent's memory
+         if "answer" in response:
+             def add_to_memory():
+                 self.agent.add_conversation_to_memory(query, response["answer"])
+
+             await loop.run_in_executor(None, add_to_memory)
+
+         return response
+
+     def setup_application(self):
+         """Set up the Telegram application with handlers."""
+         # Create the Application
+         self.application = Application.builder().token(self.token).build()
+
+         # Add command handlers
+         self.application.add_handler(CommandHandler("start", self.start_command))
+         self.application.add_handler(CommandHandler("help", self.help_command))
+         self.application.add_handler(CommandHandler("search", self.search_command))
+
+         # Add message handler
+         self.application.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, self.handle_message))
+
+         # Add error handler
+         self.application.add_error_handler(self.error_handler)
+
+         logger.info("Telegram application set up successfully")
+
+     def start(self):
+         """Start the Telegram bot in a separate thread."""
+         if not self.token:
+             logger.error("Telegram bot token not found")
+             return False
+
+         try:
+             # Set up the application
+             self.setup_application()
+
+             # Run the bot in a separate thread
+             def run_bot():
+                 asyncio.set_event_loop(asyncio.new_event_loop())
+                 self.application.run_polling(stop_signals=None)
+
+             self.bot_thread = threading.Thread(target=run_bot, daemon=True)
+             self.bot_thread.start()
+
+             logger.info("Telegram bot started in background thread")
+             return True
+         except Exception as e:
+             logger.error(f"Error starting Telegram bot: {e}")
+             return False
+
+     def stop(self):
+         """Stop the Telegram bot."""
+         if self.application:
+             logger.info("Stopping Telegram bot...")
+
+             async def stop_app():
+                 await self.application.stop()
+                 await self.application.shutdown()
+
+             # Run the stop function in a new event loop
+             loop = asyncio.new_event_loop()
+             asyncio.set_event_loop(loop)
+             try:
+                 loop.run_until_complete(stop_app())
224
+ finally:
225
+ loop.close()
226
+
227
+ logger.info("Telegram bot stopped")
228
+ return True
229
+ return False
230
+
231
+ def get_message_history(self):
232
+ """Get the message history."""
233
+ return self.message_history
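The `process_query` method above bridges python-telegram-bot's async handlers and the synchronous agent via `run_in_executor`. A minimal, self-contained sketch of that pattern (the `SlowAgent` class is a hypothetical stand-in for the real `AssistantAgent`):

```python
import asyncio

class SlowAgent:
    """Hypothetical stand-in for the synchronous RAG agent."""
    def query(self, text):
        return {"answer": f"echo: {text}", "sources": []}

async def process_query(agent, text):
    # Offload the blocking call to the default thread pool so the event
    # loop (which also serves incoming Telegram updates) is not blocked.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, lambda: agent.query(text))

result = asyncio.run(process_query(SlowAgent(), "hello"))
print(result["answer"])  # echo: hello
```

Offloading keeps a long `agent.query` call from freezing the polling loop that delivers Telegram updates.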
app/ui/streamlit_app.py CHANGED
@@ -3,6 +3,7 @@ import os
 import sys
 import tempfile
 from datetime import datetime
 from typing import List, Dict, Any
 import time
 import logging
@@ -18,20 +19,32 @@ sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(
 try:
     from app.core.agent import AssistantAgent
     from app.core.ingestion import DocumentProcessor
     from app.utils.helpers import get_document_path, format_sources, save_conversation, copy_uploaded_file
-    from app.config import LLM_MODEL, EMBEDDING_MODEL
 except ImportError:
     # Fallback to direct imports if app is not recognized as a package
     sys.path.append(os.path.abspath('.'))
     from app.core.agent import AssistantAgent
     from app.core.ingestion import DocumentProcessor
     from app.utils.helpers import get_document_path, format_sources, save_conversation, copy_uploaded_file
-    from app.config import LLM_MODEL, EMBEDDING_MODEL

 # Set page config
 st.set_page_config(
-    page_title="Personal AI Assistant (Hugging Face)",
-    page_icon="🤗",
     layout="wide"
 )

@@ -75,23 +88,146 @@ def get_document_processor(_agent):
             return ["dummy-id"]
     return DummyProcessor()

 # Initialize session state variables
 if "messages" not in st.session_state:
     st.session_state.messages = []

-# Initialize agent and document processor with caching to prevent multiple instances
 agent = get_agent()
 document_processor = get_document_processor(agent)

-# App title
-st.title("🤗 Personal AI Assistant (Hugging Face)")

-# Create a sidebar for uploading documents and settings
-with st.sidebar:
-    st.header("Upload Documents")

-    # Add file uploader with error handling
-    try:
         st.subheader("Upload a File")

         # Show supported file types info
@@ -198,168 +334,270 @@ with st.sidebar:
                     if "403" in str(e) or "forbidden" in str(e).lower():
                         st.warning("This appears to be a permissions issue. Try using a different file format or using the text input option instead.")
                     elif "unsupported" in str(e).lower() or "not supported" in str(e).lower() or "no specific loader" in str(e).lower():
-                        st.warning("This file type may not be fully supported. Try converting to PDF or TXT format.")
-                    elif "memory" in str(e).lower():
-                        st.warning("The file may be too large to process. Try a smaller file or split the content.")
-                    elif "timeout" in str(e).lower():
-                        st.warning("Processing timed out. Try a smaller file or try again later.")
-
-                    # Show troubleshooting tips
-                    with st.expander("Troubleshooting Tips"):
-                        st.markdown("""
-                        - Convert your document to PDF or plain text format
-                        - Try a smaller file (under 1MB)
-                        - Remove any password protection from the file
-                        - Try the text input option below instead
-                        - Check if the file contains complex formatting or images
-                        """)
-
-        st.markdown("---")
-    except Exception as e:
-        logger.error(f"File uploader error: {str(e)}")
-        st.error(f"File upload functionality is currently unavailable: {str(e)}")

-    st.subheader("Raw Text Input")
-    st.markdown("Alternatively, paste text directly to add to the knowledge base:")
-    text_input = st.text_area("Enter text to add to the knowledge base", height=150)
-
-    if st.button("Add Text"):
-        if text_input:
-            with st.spinner("Adding text to knowledge base..."):
                 try:
-                    # Create metadata
-                    metadata = {
-                        "type": "manual_input",
-                        "timestamp": str(datetime.now())
-                    }
-
-                    # Ingest the text with progress indication
-                    status_text = st.empty()
-                    status_text.info("Processing text...")

-                    # Ingest the text
-                    ids = document_processor.ingest_text(text_input, metadata)
-
-                    if ids and not any(str(id).startswith("error-") for id in ids):
-                        status_text.success("✅ Text added to knowledge base successfully!")
                     else:
-                        status_text.warning("⚠️ Text processing completed with warnings")
                 except Exception as e:
-                    logger.error(f"Error adding text: {str(e)}")
-                    st.error(f"Error adding text: {str(e)}")
-        else:
-            st.warning("Please enter some text to add")

-    # Display model information
-    st.header("Models")
-    st.write(f"**LLM**: [{LLM_MODEL}](https://huggingface.co/{LLM_MODEL})")
-    st.write(f"**Embeddings**: [{EMBEDDING_MODEL}](https://huggingface.co/{EMBEDDING_MODEL})")

-    # Add Hugging Face deployment info
-    st.header("Deployment")
-    st.write("This app can be easily deployed to [Hugging Face Spaces](https://huggingface.co/spaces) for free hosting.")

-    # Link to Hugging Face
-    st.markdown("""
-    <div style="text-align: center; margin-top: 20px;">
-        <a href="https://huggingface.co" target="_blank">
-            <img src="https://huggingface.co/front/assets/huggingface_logo.svg" width="200" alt="Hugging Face">
-        </a>
-    </div>
-    """, unsafe_allow_html=True)
-
-# Display chat messages
-for message in st.session_state.messages:
-    with st.chat_message(message["role"]):
-        st.write(message["content"])

-        # Display sources if available
-        if message["role"] == "assistant" and "sources" in message:
-            with st.expander("View Sources"):
-                sources = message["sources"]
-                if sources:
-                    for i, source in enumerate(sources, 1):
-                        st.write(f"{i}. {source.get('file_name', 'Unknown')}" +
-                                 (f" (Page {source['page']})" if source.get('page') else ""))
-                        st.text(source.get('content', 'No content available'))
                 else:
-                    st.write("No specific sources used.")

-# Chat input
-if prompt := st.chat_input("Ask a question..."):
-    # Add user message to chat history
-    st.session_state.messages.append({"role": "user", "content": prompt})

-    # Display user message
-    with st.chat_message("user"):
-        st.write(prompt)

-    # Generate response
-    with st.chat_message("assistant"):
-        with st.spinner("Thinking..."):
-            try:
-                # Add retry mechanism for vector store issues
-                max_retries = 3
-                for attempt in range(max_retries):
-                    try:
-                        response = agent.query(prompt)
-                        break
-                    except Exception as e:
-                        if "already accessed by another instance" in str(e) and attempt < max_retries - 1:
-                            logger.warning(f"Vector store access conflict, retrying ({attempt+1}/{max_retries})...")
-                            time.sleep(1)  # Wait before retrying
-                        else:
-                            raise
-
-                # Extract answer and sources, with fallbacks if missing
-                answer = response.get("answer", "I couldn't generate a proper response.")
-                sources = response.get("sources", [])
-
-                # Display the response
-                st.write(answer)
-
-                # Display sources in an expander
-                with st.expander("View Sources"):
-                    if sources:
-                        for i, source in enumerate(sources, 1):
-                            st.write(f"{i}. {source.get('file_name', 'Unknown')}" +
-                                     (f" (Page {source['page']})" if source.get('page') else ""))
-                            st.text(source.get('content', 'No content available'))
-                    else:
-                        st.write("No specific sources used.")
-
-                # Save conversation
                 try:
-                    save_conversation(prompt, answer, sources)
-                except Exception as save_error:
-                    logger.error(f"Error saving conversation: {save_error}")
-
-                # Add assistant response to chat history
-                st.session_state.messages.append({
-                    "role": "assistant",
-                    "content": answer,
-                    "sources": sources
-                })
-
-                # Update the agent's memory
                 try:
-                    agent.add_conversation_to_memory(prompt, answer)
-                except Exception as memory_error:
-                    logger.error(f"Error adding to memory: {memory_error}")

-            except Exception as e:
-                error_msg = f"Error generating response: {str(e)}"
-                logger.error(error_msg)
-                st.error(error_msg)
-                st.session_state.messages.append({
-                    "role": "assistant",
-                    "content": "I'm sorry, I encountered an error while processing your request. Please try again or refresh the page.",
-                    "sources": []
-                })

-# Add a footer
-st.markdown("---")
-st.markdown("Built with LangChain, Hugging Face, and Qdrant")

 if __name__ == "__main__":
     # This is used when running the file directly
 import sys
 import tempfile
 from datetime import datetime
+import pandas as pd
 from typing import List, Dict, Any
 import time
 import logging
@@ -18,20 +19,32 @@ sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(
 try:
     from app.core.agent import AssistantAgent
     from app.core.ingestion import DocumentProcessor
+    from app.core.telegram_bot import TelegramBot
+    from app.core.chat_history import ChatHistoryManager
     from app.utils.helpers import get_document_path, format_sources, save_conversation, copy_uploaded_file
+    from app.config import (
+        LLM_MODEL, EMBEDDING_MODEL, TELEGRAM_ENABLED,
+        TELEGRAM_BOT_TOKEN, TELEGRAM_ALLOWED_USERS,
+        HF_DATASET_NAME
+    )
 except ImportError:
     # Fallback to direct imports if app is not recognized as a package
     sys.path.append(os.path.abspath('.'))
     from app.core.agent import AssistantAgent
     from app.core.ingestion import DocumentProcessor
+    from app.core.telegram_bot import TelegramBot
+    from app.core.chat_history import ChatHistoryManager
     from app.utils.helpers import get_document_path, format_sources, save_conversation, copy_uploaded_file
+    from app.config import (
+        LLM_MODEL, EMBEDDING_MODEL, TELEGRAM_ENABLED,
+        TELEGRAM_BOT_TOKEN, TELEGRAM_ALLOWED_USERS,
+        HF_DATASET_NAME
+    )

 # Set page config
 st.set_page_config(
+    page_title="Personal AI Second Brain",
+    page_icon="🧠",
     layout="wide"
 )

@@ -75,23 +88,146 @@ def get_document_processor(_agent):
             return ["dummy-id"]
     return DummyProcessor()

+# Function to initialize chat history manager
+@st.cache_resource
+def get_chat_history_manager():
+    logger.info("Initializing ChatHistoryManager")
+    try:
+        return ChatHistoryManager(dataset_name=HF_DATASET_NAME)
+    except Exception as e:
+        logger.error(f"Error initializing chat history manager: {e}")
+        st.error(f"Could not initialize chat history: {str(e)}")
+        # Return a dummy manager as fallback
+        class DummyHistoryManager:
+            def load_history(self, *args, **kwargs):
+                return []
+            def save_conversation(self, *args, **kwargs):
+                return True
+            def sync_to_hub(self, *args, **kwargs):
+                return False
+        return DummyHistoryManager()
+
+# Function to initialize Telegram bot
+@st.cache_resource
+def get_telegram_bot(_agent):
+    """Initialize Telegram bot with unhashable agent parameter."""
+    if not TELEGRAM_ENABLED or not TELEGRAM_BOT_TOKEN:
+        logger.info("Telegram bot disabled or token missing")
+        return None
+
+    logger.info("Initializing Telegram bot")
+    try:
+        bot = TelegramBot(
+            agent=_agent,
+            token=TELEGRAM_BOT_TOKEN,
+            allowed_user_ids=TELEGRAM_ALLOWED_USERS
+        )
+        return bot
+    except Exception as e:
+        logger.error(f"Error initializing Telegram bot: {e}")
+        return None
+
 # Initialize session state variables
 if "messages" not in st.session_state:
     st.session_state.messages = []
+if "telegram_status" not in st.session_state:
+    st.session_state.telegram_status = "Not started"
+if "history_filter" not in st.session_state:
+    st.session_state.history_filter = ""
+if "current_tab" not in st.session_state:
+    st.session_state.current_tab = "Chat"

+# Initialize agent and other components with caching
 agent = get_agent()
 document_processor = get_document_processor(agent)
+chat_history_manager = get_chat_history_manager()
+telegram_bot = get_telegram_bot(agent)

+# Load initial messages from history
+if not st.session_state.messages:
+    try:
+        recent_history = chat_history_manager.load_history()
+        # Take the last 10 conversations and convert to messages format
+        for conv in recent_history[-10:]:
+            if "user_query" in conv and "assistant_response" in conv:
+                st.session_state.messages.append({"role": "user", "content": conv["user_query"]})
+                st.session_state.messages.append({"role": "assistant", "content": conv["assistant_response"]})
+    except Exception as e:
+        logger.error(f"Error loading initial history: {e}")

+# Main UI
+st.title("🧠 Personal AI Second Brain")
+
+# Create tabs for different functionality
+tabs = st.tabs(["Chat", "Documents", "History", "Settings"])
+
+# Chat tab
+with tabs[0]:
+    if st.session_state.current_tab != "Chat":
+        st.session_state.current_tab = "Chat"
+
+    # Display chat messages from history
+    for message in st.session_state.messages:
+        with st.chat_message(message["role"]):
+            st.markdown(message["content"])

+    # Accept user input
+    if prompt := st.chat_input("Ask me anything..."):
+        # Add user message to chat history
+        st.session_state.messages.append({"role": "user", "content": prompt})
+
+        # Display user message in chat
+        with st.chat_message("user"):
+            st.markdown(prompt)
+
+        # Generate and display assistant response
+        with st.chat_message("assistant"):
+            message_placeholder = st.empty()
+            message_placeholder.markdown("Thinking...")
+
+            try:
+                response = agent.query(prompt)
+                answer = response["answer"]
+                sources = response["sources"]
+
+                # Update the placeholder with the response
+                message_placeholder.markdown(answer)
+
+                # Add assistant response to chat history
+                st.session_state.messages.append({"role": "assistant", "content": answer})
+
+                # Save conversation to history manager
+                chat_history_manager.save_conversation({
+                    "user_query": prompt,
+                    "assistant_response": answer,
+                    "sources": [s["source"] for s in sources] if sources else [],
+                    "timestamp": datetime.now().isoformat()
+                })
+
+                # Display sources if available
+                if sources:
+                    with st.expander("Sources"):
+                        st.markdown(format_sources(sources))
+
+                # Add to agent's memory
+                agent.add_conversation_to_memory(prompt, answer)
+
+            except Exception as e:
+                logger.error(f"Error generating response: {e}")
+                error_message = f"I'm sorry, I encountered an error: {str(e)}"
+                message_placeholder.markdown(error_message)
+                st.session_state.messages.append({"role": "assistant", "content": error_message})
+
+# Documents tab (existing functionality)
+with tabs[1]:
+    if st.session_state.current_tab != "Documents":
+        st.session_state.current_tab = "Documents"
+
+    st.header("Upload & Manage Documents")
+
+    col1, col2 = st.columns(2)
+
+    with col1:
         st.subheader("Upload a File")

         # Show supported file types info
@@ -198,168 +334,270 @@ with st.sidebar:
                     if "403" in str(e) or "forbidden" in str(e).lower():
                         st.warning("This appears to be a permissions issue. Try using a different file format or using the text input option instead.")
                     elif "unsupported" in str(e).lower() or "not supported" in str(e).lower() or "no specific loader" in str(e).lower():
+                        st.warning("This file format may not be supported. Try converting to PDF or TXT first.")

+    with col2:
+        st.subheader("Add Text Directly")
+
+        # Text input for adding content directly
+        text_content = st.text_area("Enter text to add to your knowledge base:", height=200)
+        text_title = st.text_input("Give this text a title:")
+
+        if st.button("Process Text") and text_content and text_title:
+            with st.spinner("Processing text..."):
+                status_placeholder = st.empty()
+                status_placeholder.info("Processing your text...")
+
                 try:
+                    # Process the text content
+                    metadata = {"title": text_title, "source": "direct_input"}
+                    ids = document_processor.ingest_text(text_content, metadata)

+                    if ids:
+                        status_placeholder.success("✅ Text processed successfully!")
                     else:
+                        status_placeholder.warning("⚠️ Text processed with warnings.")
                 except Exception as e:
+                    logger.error(f"Error processing text: {str(e)}")
+                    status_placeholder.error(f"Error processing text: {str(e)}")
+
+# History tab (new)
+with tabs[2]:
+    if st.session_state.current_tab != "History":
+        st.session_state.current_tab = "History"

+    st.header("Chat History")

+    # Search and filtering options
+    col1, col2, col3 = st.columns([2, 1, 1])

+    with col1:
+        search_query = st.text_input("Search conversations:", st.session_state.history_filter)
+        if search_query != st.session_state.history_filter:
+            st.session_state.history_filter = search_query
+
+    with col2:
+        st.text("Date Range (optional)")
+        start_date = st.date_input("Start date", None)
+
+    with col3:
+        st.text("\u00A0")  # Non-breaking space for alignment
+        end_date = st.date_input("End date", None)
+
+    # Load and filter history
+    try:
+        history = chat_history_manager.load_history()

+        # Apply search filter if provided
+        if search_query:
+            history = chat_history_manager.search_conversations(search_query)
+
+        # Apply date filtering if provided
+        if start_date or end_date:
+            # Convert datetime.date to datetime.datetime for filtering
+            start_datetime = datetime.combine(start_date, datetime.min.time()) if start_date else None
+            end_datetime = datetime.combine(end_date, datetime.max.time()) if end_date else None
+            history = chat_history_manager.get_conversations_by_date(start_datetime, end_datetime)
+
+        # Display history
+        if not history:
+            st.info("No conversation history found matching your criteria.")
+        else:
+            # Sort by timestamp (newest first)
+            history.sort(key=lambda x: x.get("timestamp", ""), reverse=True)
+
+            # Create a DataFrame for display
+            df = pd.DataFrame(history)
+            if not df.empty:
+                # Select and rename columns for display
+                if all(col in df.columns for col in ["timestamp", "user_query", "assistant_response"]):
+                    display_df = df[["timestamp", "user_query", "assistant_response"]]
+                    display_df = display_df.rename(columns={
+                        "timestamp": "Date",
+                        "user_query": "Your Question",
+                        "assistant_response": "AI Response"
+                    })
+
+                    # Format timestamp
+                    if "Date" in display_df.columns:
+                        display_df["Date"] = pd.to_datetime(display_df["Date"]).dt.strftime('%Y-%m-%d %H:%M')
+
+                    # Truncate long text
+                    for col in ["Your Question", "AI Response"]:
+                        if col in display_df.columns:
+                            display_df[col] = display_df[col].apply(lambda x: x[:100] + "..." if isinstance(x, str) and len(x) > 100 else x)
+
+                    # Display as table
+                    st.dataframe(display_df, use_container_width=True)
+
+                    # Add option to view full conversation
+                    if not df.empty:
+                        selected_idx = st.selectbox("Select conversation to view details:",
+                                                    range(len(df)),
+                                                    format_func=lambda i: f"{df.iloc[i].get('timestamp', 'Unknown')} - {df.iloc[i].get('user_query', '')[:30]}...")
+
+                        if selected_idx is not None:
+                            selected_conv = df.iloc[selected_idx]
+                            st.subheader("Conversation Details")
+
+                            st.markdown("**Your Question:**")
+                            st.markdown(selected_conv.get("user_query", ""))
+
+                            st.markdown("**AI Response:**")
+                            st.markdown(selected_conv.get("assistant_response", ""))
+
+                            # Display sources if available
+                            if "sources" in selected_conv and selected_conv["sources"]:
+                                st.markdown("**Sources:**")
+                                for src in selected_conv["sources"]:
+                                    st.markdown(f"- {src}")
+
+                            # Option to use this conversation in chat
+                            if st.button("Continue this conversation"):
+                                # Add to current chat session
+                                st.session_state.messages.append({"role": "user", "content": selected_conv.get("user_query", "")})
+                                st.session_state.messages.append({"role": "assistant", "content": selected_conv.get("assistant_response", "")})
+                                # Switch to chat tab
+                                st.session_state.current_tab = "Chat"
+                                st.experimental_rerun()
                 else:
+                    st.error("Unexpected history format. Some columns are missing.")
+            else:
+                st.info("No conversation history found.")
+    except Exception as e:
+        logger.error(f"Error displaying history: {e}")
+        st.error(f"Error loading conversation history: {str(e)}")
+
+    # Sync to Hugging Face Hub button
+    if HF_DATASET_NAME:
+        if st.button("Sync History to Hugging Face Hub"):
+            with st.spinner("Syncing history..."):
+                success = chat_history_manager.sync_to_hub()
+                if success:
+                    st.success("History successfully synced to Hugging Face Hub!")
+                else:
+                    st.error("Failed to sync history. Check logs for details.")

+# Settings tab (new)
+with tabs[3]:
+    if st.session_state.current_tab != "Settings":
+        st.session_state.current_tab = "Settings"

+    st.header("Settings")

+    # System information
+    st.subheader("System Information")
+    system_info = {
+        "LLM Model": LLM_MODEL,
+        "Embedding Model": EMBEDDING_MODEL,
+        "HF Dataset": HF_DATASET_NAME or "Not configured",
+        "Telegram Enabled": "Yes" if TELEGRAM_ENABLED else "No"
+    }
+
+    for key, value in system_info.items():
+        st.markdown(f"**{key}:** {value}")
+
+    # Telegram settings
+    st.subheader("Telegram Integration")
+
+    telegram_status = "Not configured"
+    if telegram_bot:
+        telegram_status = st.session_state.telegram_status
+
+    st.markdown(f"**Status:** {telegram_status}")
+
+    col1, col2 = st.columns(2)
+
+    with col1:
+        if telegram_bot and st.session_state.telegram_status != "Running":
+            if st.button("Start Telegram Bot"):
                 try:
+                    success = telegram_bot.start()
+                    if success:
+                        st.session_state.telegram_status = "Running"
+                        st.success("Telegram bot started!")
+                    else:
+                        st.error("Failed to start Telegram bot. Check logs for details.")
+                except Exception as e:
+                    logger.error(f"Error starting Telegram bot: {e}")
+                    st.error(f"Error: {str(e)}")
+
+    with col2:
+        if telegram_bot and st.session_state.telegram_status == "Running":
+            if st.button("Stop Telegram Bot"):
                 try:
+                    telegram_bot.stop()
+                    st.session_state.telegram_status = "Stopped"
+                    st.info("Telegram bot stopped.")
+                except Exception as e:
+                    logger.error(f"Error stopping Telegram bot: {e}")
+                    st.error(f"Error: {str(e)}")
+
+    if telegram_bot:
+        with st.expander("Telegram Bot Settings"):
+            st.markdown("""
+            To configure the Telegram bot, set these environment variables:
+            - `TELEGRAM_ENABLED`: Set to `true` to enable the bot
+            - `TELEGRAM_BOT_TOKEN`: Your Telegram bot token
+            - `TELEGRAM_ALLOWED_USERS`: Comma-separated list of Telegram user IDs (optional)
+            """)
+
+            if telegram_bot.allowed_user_ids:
+                st.markdown("**Allowed User IDs:**")
+                for user_id in telegram_bot.allowed_user_ids:
+                    st.markdown(f"- {user_id}")
+            else:
+                st.markdown("The bot will respond to all users (no user restrictions configured).")

+        # Show Telegram bot instructions
+        st.markdown("### Telegram Bot Commands")
+        st.markdown("""
+        - **/start**: Start a conversation with the bot
+        - **/help**: Shows available commands
+        - **/search**: Use `/search your query` to search your knowledge base
+        - **Direct messages**: Send any message to chat with your second brain
+
+        #### How to Set Up Your Telegram Bot
+        1. Talk to [@BotFather](https://t.me/botfather) on Telegram
+        2. Use the `/newbot` command to create a new bot
+        3. Get your bot token and add it to your `.env` file
+        4. Set `TELEGRAM_ENABLED=true` in your `.env` file
+        5. To find your Telegram user ID, talk to [@userinfobot](https://t.me/userinfobot)
+        """)
+    else:
+        st.info("Telegram integration is not enabled. Configure your .env file to enable it.")
+
+    # Settings for Hugging Face Dataset persistence
+    st.subheader("Hugging Face Dataset Settings")
+
+    if HF_DATASET_NAME:
+        st.markdown(f"**Dataset Name:** {HF_DATASET_NAME}")
+        st.markdown(f"**Local History File:** {chat_history_manager.local_file}")
+
+        # HF Dataset instructions
+        with st.expander("Setup Instructions"):
+            st.markdown("""
+            ### Setting up Hugging Face Dataset Persistence
+
+            1. Create a private dataset repository on Hugging Face Hub
+            2. Set your API token in the `.env` file as `HF_API_KEY`
+            3. Set your dataset name as `HF_DATASET_NAME` (format: username/repo-name)
+
+            Your chat history will be automatically synced to the Hub.
+            """)
+    else:
+        st.info("Hugging Face Dataset persistence is not configured. Set HF_DATASET_NAME in your .env file.")

+# Run Telegram bot on startup if enabled
+if telegram_bot and TELEGRAM_ENABLED and st.session_state.telegram_status == "Not started":
+    try:
+        success = telegram_bot.start()
+        if success:
+            st.session_state.telegram_status = "Running"
+            logger.info("Telegram bot started automatically")
+    except Exception as e:
+        logger.error(f"Error auto-starting Telegram bot: {e}")
+        st.session_state.telegram_status = "Error"

 if __name__ == "__main__":
     # This is used when running the file directly
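The `get_telegram_bot(_agent)` factory above relies on a Streamlit convention: `@st.cache_resource` excludes parameters whose names start with an underscore from the cache key, which is how an unhashable agent instance can be passed through. A framework-free sketch of that behavior (all names here are illustrative, not Streamlit's actual implementation):

```python
import functools

def cache_resource(fn):
    """Minimal sketch of Streamlit's @st.cache_resource naming convention:
    parameters whose names start with "_" are left out of the cache key."""
    cache = {}
    arg_names = fn.__code__.co_varnames[:fn.__code__.co_argcount]

    @functools.wraps(fn)
    def wrapper(*args):
        # Build the key only from non-underscore (hashable) parameters.
        key = tuple(
            (name, value) for name, value in zip(arg_names, args)
            if not name.startswith("_")
        )
        if key not in cache:
            cache[key] = fn(*args)
        return cache[key]

    return wrapper

calls = []

@cache_resource
def get_bot(_agent, token):
    calls.append(token)
    return {"agent": _agent, "token": token}

agent = object()  # stand-in for an unhashable resource
bot_a = get_bot(agent, "tok-123")
bot_b = get_bot(agent, "tok-123")
```

Both calls return the same cached object and the factory body runs only once, which is the behavior the app depends on for the agent and bot singletons.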
deploy_fixes.py ADDED
@@ -0,0 +1,130 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python
2
+ """
3
+ Deploy fixes for AI responses and file uploading to Hugging Face Spaces
4
+ """
5
+ import os
6
+ import subprocess
7
+ import sys
8
+ import time
9
+ from getpass import getpass
10
+
11
+ def deploy_fixes():
12
+ """Deploy fixes to Hugging Face Space."""
13
+ print("=" * 60)
14
+ print(" Deploy AI Response and File Upload Fixes to Hugging Face Space")
15
+ print("=" * 60)
16
+
17
+ # Get credentials
18
+ username = input("Enter your Hugging Face username: ")
19
+ token = getpass("Enter your Hugging Face token: ")
20
+ space_name = input("Enter your Space name: ")
21
+
22
+ # Set environment variables
23
+ os.environ["HUGGINGFACEHUB_API_TOKEN"] = token
24
+ os.environ["HF_API_KEY"] = token
25
+
26
+ # Create a commit message describing the changes
27
+ commit_message = """
28
+ Fix AI responses and file uploading functionality
29
+
30
+ - Improved AI responses with better prompt formatting and instructions
31
+ - Enhanced file upload handling with better error recovery
32
+     - Added support for more file types (docx, html, md, etc.)
+     - Improved UI with progress tracking and better error messages
+     - Fixed edge cases with empty files and error handling
+     """
+ 
+     # Add the remote URL with credentials embedded
+     remote_url = f"https://{username}:{token}@huggingface.co/spaces/{username}/{space_name}"
+ 
+     try:
+         print("\n1. Configuring Git repository...")
+         # Configure the git remote
+         remotes = subprocess.run(["git", "remote"], capture_output=True, text=True).stdout.strip().split('\n')
+         if "hf" not in remotes:
+             subprocess.run(["git", "remote", "add", "hf", remote_url], check=True)
+             print("   Added 'hf' remote.")
+         else:
+             subprocess.run(["git", "remote", "set-url", "hf", remote_url], check=True)
+             print("   Updated 'hf' remote.")
+ 
+         print("\n2. Pulling latest changes...")
+         try:
+             # Try to pull any changes
+             subprocess.run(["git", "pull", "hf", "main"], check=True)
+             print("   Successfully pulled latest changes.")
+         except subprocess.CalledProcessError:
+             print("   Warning: Could not pull latest changes. Will attempt to push anyway.")
+ 
+         print("\n3. Staging changes...")
+         # Stage the fixed files
+         subprocess.run(["git", "add", "app/core/memory.py", "app/core/ingestion.py", "app/ui/streamlit_app.py"], check=True)
+ 
+         print("\n4. Committing changes...")
+         try:
+             subprocess.run(["git", "commit", "-m", commit_message], check=True)
+             print("   Changes committed successfully.")
+         except subprocess.CalledProcessError:
+             # Check whether there were any changes to commit
+             status = subprocess.run(["git", "status", "--porcelain"], capture_output=True, text=True).stdout.strip()
+             if not status:
+                 print("   No changes to commit.")
+             else:
+                 print("   Error making commit. Will try to push existing commits.")
+ 
+         print("\n5. Pushing changes to Hugging Face Space...")
+         # Multiple push strategies, tried in order
+         push_success = False
+ 
+         # Strategy 1: Standard push
+         try:
+             subprocess.run(["git", "push", "hf", "main"], check=True)
+             push_success = True
+             print("   Push successful!")
+         except subprocess.CalledProcessError as e:
+             print(f"   Standard push failed: {e}")
+             print("   Trying force push...")
+ 
+             # Strategy 2: Force push
+             try:
+                 time.sleep(1)  # Brief pause
+                 subprocess.run(["git", "push", "-f", "hf", "main"], check=True)
+                 push_success = True
+                 print("   Force push successful!")
+             except subprocess.CalledProcessError as e:
+                 print(f"   Force push failed: {e}")
+                 print("   Trying alternative push approach...")
+ 
+                 # Strategy 3: Set upstream and force push
+                 try:
+                     time.sleep(1)  # Brief pause
+                     subprocess.run(["git", "push", "-f", "--set-upstream", "hf", "main"], check=True)
+                     push_success = True
+                     print("   Alternative push successful!")
+                 except subprocess.CalledProcessError as e:
+                     print(f"   Alternative push failed: {e}")
+ 
+         if push_success:
+             print("\n✅ Success! Your fixes have been deployed to Hugging Face Space.")
+             print(f"   View your Space at: https://huggingface.co/spaces/{username}/{space_name}")
+             print("   Note: It may take a few minutes for changes to appear as the Space rebuilds.")
+             return True
+         else:
+             print("\n❌ All push attempts failed. Please check the error messages above.")
+             return False
+     except Exception as e:
+         print(f"\n❌ Unexpected error during deployment: {e}")
+         return False
+ 
+ if __name__ == "__main__":
+     try:
+         result = deploy_fixes()
+         if result:
+             print("\nDeployment completed successfully.")
+         else:
+             print("\nDeployment failed. Please try again or deploy manually.")
+             sys.exit(1)
+     except KeyboardInterrupt:
+         print("\nDeployment cancelled by user.")
+         sys.exit(1)
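The credential-embedded remote URL built in step 1 above can be sanity-checked offline. This is a minimal sketch; the username, token, and Space name are placeholders, not real credentials:

```python
# Build the same remote URL shape that deploy_fixes() passes to `git remote add hf`
# (placeholder values; a real token would be an "hf_..." access token)
username, token, space_name = "alice", "hf_dummy_token", "my-space"
remote_url = f"https://{username}:{token}@huggingface.co/spaces/{username}/{space_name}"
print(remote_url)
```

Embedding the token in the remote URL is what lets the non-interactive `git push` authenticate, at the cost of the token appearing in the local git config.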
direct_upload.py ADDED
@@ -0,0 +1,136 @@
+ #!/usr/bin/env python
+ """
+ Direct uploader for Hugging Face Spaces - a simpler approach
+ """
+ import os
+ import sys
+ from getpass import getpass
+ 
+ # Import requests up front, installing it if missing. The install check must run
+ # before the first import, otherwise the script would crash before reaching it.
+ try:
+     import requests
+ except ImportError:
+     print("Installing required package: requests")
+     import subprocess
+     subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"])
+     import requests
+ 
+ def upload_file(file_path, space_id, token):
+     """Upload a single file to a Hugging Face Space using the HTTP API"""
+     # Read the file content
+     with open(file_path, 'rb') as f:
+         file_content = f.read()
+ 
+     # Construct the API URL
+     api_url = f"https://huggingface.co/api/spaces/{space_id}/upload/{file_path}"
+ 
+     # Set headers
+     headers = {
+         "Authorization": f"Bearer {token}"
+     }
+ 
+     # Upload the file
+     response = requests.post(
+         api_url,
+         headers=headers,
+         files={"file": (os.path.basename(file_path), file_content)}
+     )
+ 
+     # Check the response
+     if response.status_code == 200:
+         return True
+     else:
+         print(f"Error: {response.status_code} - {response.text}")
+         return False
+ 
+ def main():
+     print("=" * 60)
+     print(" Direct File Upload to Hugging Face Space")
+     print("=" * 60)
+ 
+     # Get required information
+     print("\nPlease enter your Hugging Face details:")
+     username = input("Username: ")
+     token = getpass("Access Token: ")
+ 
+     # Space name with validation
+     while True:
+         space_name = input("Space Name (without username prefix): ")
+         if "/" in space_name:
+             print("Error: Please enter just the Space name, without a username or slashes")
+         else:
+             break
+ 
+     # Construct the full Space ID
+     space_id = f"{username}/{space_name}"
+ 
+     # Validate that the Space exists
+     print(f"\nValidating Space: {space_id}")
+     validation_url = f"https://huggingface.co/api/spaces/{space_id}"
+     headers = {"Authorization": f"Bearer {token}"}
+ 
+     try:
+         validation_response = requests.get(validation_url, headers=headers)
+         if validation_response.status_code != 200:
+             print(f"Error: Space '{space_id}' not found or not accessible.")
+             print("Please check the Space name and your permissions.")
+             print(f"Space URL would be: https://huggingface.co/spaces/{space_id}")
+             return False
+         else:
+             print(f"✅ Space found! URL: https://huggingface.co/spaces/{space_id}")
+     except Exception as e:
+         print(f"Error validating Space: {e}")
+         return False
+ 
+     # Files to upload
+     files_to_upload = [
+         "app/core/memory.py",
+         "app/core/ingestion.py",
+         "app/ui/streamlit_app.py"
+     ]
+ 
+     # Verify that the files exist locally
+     missing_files = [f for f in files_to_upload if not os.path.exists(f)]
+     if missing_files:
+         print(f"Error: The following files don't exist locally: {missing_files}")
+         return False
+ 
+     # Upload each file
+     print("\nUploading files:")
+     success_count = 0
+ 
+     for file_path in files_to_upload:
+         print(f"📤 Uploading {file_path}... ", end="", flush=True)
+         if upload_file(file_path, space_id, token):
+             print("✅ Success!")
+             success_count += 1
+         else:
+             print("❌ Failed!")
+ 
+     # Print a summary
+     print(f"\nUpload summary: {success_count}/{len(files_to_upload)} files uploaded successfully.")
+ 
+     if success_count == len(files_to_upload):
+         print("\n✅ All files uploaded successfully!")
+         print(f"View your Space at: https://huggingface.co/spaces/{space_id}")
+         print("Note: It may take a few minutes for your Space to rebuild with the new changes.")
+         return True
+     else:
+         print("\n⚠️ Some files failed to upload. Please check the errors above.")
+         return False
+ 
+ if __name__ == "__main__":
+     try:
+         success = main()
+         sys.exit(0 if success else 1)
+     except KeyboardInterrupt:
+         print("\nUpload cancelled by user.")
+         sys.exit(1)
+     except Exception as e:
+         print(f"\nUnexpected error: {e}")
+         sys.exit(1)
push_to_huggingface.py ADDED
@@ -0,0 +1,86 @@
+ #!/usr/bin/env python
+ """
+ Push fixes directly to Hugging Face using the Hub API for more reliable authentication
+ """
+ import os
+ import sys
+ from getpass import getpass
+ 
+ # Import huggingface_hub up front, installing it if missing. The install check
+ # must run before the first import, otherwise the script would crash before
+ # reaching it.
+ try:
+     from huggingface_hub import HfApi
+ except ImportError:
+     print("huggingface_hub package is not installed. Installing it...")
+     import subprocess
+     subprocess.check_call([sys.executable, "-m", "pip", "install", "huggingface_hub"])
+     from huggingface_hub import HfApi
+ 
+ def push_fixes():
+     print("=" * 60)
+     print(" Push AI Response and File Upload Fixes to Hugging Face Space")
+     print("=" * 60)
+ 
+     # Get credentials
+     username = input("Enter your Hugging Face username: ")
+     token = getpass("Enter your Hugging Face token: ")
+     space_name = input("Enter your Space name (just the name, not including your username): ")
+ 
+     try:
+         # Initialize the Hugging Face API
+         api = HfApi(token=token)
+ 
+         # Print user info to confirm authentication
+         # (whoami()['name'] is the account handle, 'fullname' the display name)
+         print("\nAuthenticating with Hugging Face...")
+         user_info = api.whoami()
+         print(f"Authenticated as: {user_info['fullname']} (@{user_info['name']})")
+ 
+         # Space repository ID
+         repo_id = f"{username}/{space_name}"
+         print(f"\nPreparing to update Space: {repo_id}")
+         print(f"Space URL: https://huggingface.co/spaces/{repo_id}")
+ 
+         # Files to upload
+         files_to_upload = [
+             "app/core/memory.py",
+             "app/core/ingestion.py",
+             "app/ui/streamlit_app.py"
+         ]
+ 
+         # Upload each file
+         print("\nUploading files:")
+         for file_path in files_to_upload:
+             try:
+                 print(f"  - Uploading {file_path}...")
+                 api.upload_file(
+                     path_or_fileobj=file_path,
+                     path_in_repo=file_path,
+                     repo_id=repo_id,
+                     repo_type="space",
+                     commit_message=f"Fix: Improve {os.path.basename(file_path)} for better AI responses and file uploads"
+                 )
+                 print("    Success!")
+             except Exception as e:
+                 print(f"    Error uploading {file_path}: {str(e)}")
+                 return False
+ 
+         print("\n✅ All files uploaded successfully!")
+         print(f"View your Space at: https://huggingface.co/spaces/{username}/{space_name}")
+         print("Note: It may take a few minutes for your Space to rebuild with the new changes.")
+         return True
+ 
+     except Exception as e:
+         print(f"\n❌ Error: {str(e)}")
+         return False
+ 
+ if __name__ == "__main__":
+     if push_fixes():
+         print("\nPush completed successfully.")
+     else:
+         print("\nPush failed. Please check the error messages above.")
+         sys.exit(1)
requirements.txt CHANGED
@@ -1,15 +1,19 @@
- langchain==0.1.3
- langchain-community==0.0.16
- huggingface-hub==0.20.2
- transformers==4.36.2
- sentence-transformers==2.2.2
- numpy==1.26.3
- qdrant-client==1.7.0
- fastapi==0.104.1
- uvicorn==0.24.0
- python-dotenv==1.0.0
- pydantic==2.5.2
- tiktoken==0.5.2
- pypdf==3.17.1
- streamlit==1.29.0
- torch==2.1.2
+ langchain>=0.1.0
+ langchain-community>=0.0.10
+ sentence-transformers>=2.2.2
+ streamlit>=1.28.1
+ qdrant-client>=1.6.3
+ transformers>=4.34.1
+ accelerate>=0.25.0
+ torch>=2.0.0
+ tqdm>=4.66.1
+ python-dotenv>=1.0.0
+ pydantic>=2.4.2
+ fastapi>=0.104.1
+ uvicorn>=0.24.0
+ Pillow>=10.1.0
+ docx2txt>=0.8
+ unstructured>=0.10.30
+ python-telegram-bot>=20.6
+ datasets>=2.15.0
+ huggingface_hub>=0.19.0
update_imports.py ADDED
@@ -0,0 +1,59 @@
+ #!/usr/bin/env python
+ """
+ Update deprecated langchain imports to langchain_community in project files
+ """
+ import os
+ import re
+ 
+ def update_imports(file_path):
+     """Update imports in a single file"""
+     print(f"Processing {file_path}")
+ 
+     with open(file_path, 'r', encoding='utf-8') as file:
+         content = file.read()
+ 
+     # Define import replacements
+     replacements = [
+         (r'from langchain\.vectorstores import (.*)', r'from langchain_community.vectorstores import \1'),
+         (r'from langchain\.llms import (.*)', r'from langchain_community.llms import \1'),
+         (r'from langchain\.document_loaders import (.*)', r'from langchain_community.document_loaders import \1'),
+         (r'from langchain\.embeddings import (.*)', r'from langchain_community.embeddings import \1'),
+         (r'import langchain\.vectorstores', r'import langchain_community.vectorstores'),
+         (r'import langchain\.llms', r'import langchain_community.llms'),
+         (r'import langchain\.document_loaders', r'import langchain_community.document_loaders'),
+         (r'import langchain\.embeddings', r'import langchain_community.embeddings'),
+     ]
+ 
+     # Apply all replacements
+     for pattern, replacement in replacements:
+         content = re.sub(pattern, replacement, content)
+ 
+     # Write the updated content back to the file
+     with open(file_path, 'w', encoding='utf-8') as file:
+         file.write(content)
+ 
+     return True
+ 
+ def main():
+     """Update all project files"""
+     # Files to update
+     files_to_update = [
+         'app/core/memory.py',
+         'app/core/llm.py',
+         'app/core/ingestion.py',
+         'app/core/agent.py'
+     ]
+ 
+     updated_count = 0
+     for file_path in files_to_update:
+         if os.path.exists(file_path):
+             if update_imports(file_path):
+                 updated_count += 1
+         else:
+             print(f"File not found: {file_path}")
+ 
+     print(f"\nCompleted! Updated {updated_count}/{len(files_to_update)} files.")
+ 
+ if __name__ == "__main__":
+     main()
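The rewrite rules in update_imports.py can be exercised without touching any files. A minimal sketch using two of the same patterns on an in-memory snippet (the `Qdrant`/`HuggingFaceHub` imports are just sample inputs):

```python
import re

# Two of the rewrite rules from update_imports.py
replacements = [
    (r'from langchain\.vectorstores import (.*)', r'from langchain_community.vectorstores import \1'),
    (r'from langchain\.llms import (.*)', r'from langchain_community.llms import \1'),
]

# Sample file content with deprecated imports
sample = "from langchain.vectorstores import Qdrant\nfrom langchain.llms import HuggingFaceHub\n"
for pattern, replacement in replacements:
    sample = re.sub(pattern, replacement, sample)

print(sample)
```

The capture group `(.*)` carries the imported names through unchanged, so only the module path is rewritten.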
upload_with_commit.py ADDED
@@ -0,0 +1,157 @@
+ #!/usr/bin/env python
+ """
+ Hugging Face Space uploader using the commit endpoint
+ """
+ import os
+ import sys
+ import base64
+ from getpass import getpass
+ 
+ # Import requests up front, installing it if missing. The install check must run
+ # before the first import, otherwise the script would crash before reaching it.
+ try:
+     import requests
+ except ImportError:
+     print("Installing required package: requests")
+     import subprocess
+     subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"])
+     import requests
+ 
+ def upload_files_commit(space_id, token, files_to_upload):
+     """Upload files in a single commit using the commit endpoint"""
+     # API URL for commits
+     api_url = f"https://huggingface.co/api/spaces/{space_id}/commit"
+ 
+     # Headers
+     headers = {
+         "Authorization": f"Bearer {token}",
+         "Content-Type": "application/json"
+     }
+ 
+     # Prepare operations (files to add or modify)
+     operations = []
+     for file_path in files_to_upload:
+         # Read the file content and base64-encode it
+         with open(file_path, 'rb') as f:
+             content = f.read()
+         content_b64 = base64.b64encode(content).decode("ascii")
+ 
+         # Add the operation
+         operations.append({
+             "operation": "addOrUpdate",
+             "path": file_path,
+             "content": content_b64,
+             "encoding": "base64"
+         })
+ 
+     # Prepare the commit payload
+     payload = {
+         "operations": operations,
+         "commit_message": "Fix AI responses and file upload handling"
+     }
+ 
+     # Make the API request
+     response = requests.post(api_url, headers=headers, json=payload)
+ 
+     # Check the response
+     if response.status_code == 200:
+         return True, "Commit successful"
+     else:
+         return False, f"Error: {response.status_code} - {response.text}"
+ 
+ def main():
+     print("=" * 60)
+     print(" Hugging Face Space File Uploader (Commit Method)")
+     print("=" * 60)
+ 
+     # Get required information
+     print("\nPlease enter your Hugging Face details:")
+     username = input("Username: ")
+     token = getpass("Access Token: ")
+ 
+     # Space name with validation
+     while True:
+         space_name = input("Space Name (without username prefix): ")
+         if "/" in space_name:
+             print("Error: Please enter just the Space name, without a username or slashes")
+         else:
+             break
+ 
+     # Construct the full Space ID
+     space_id = f"{username}/{space_name}"
+ 
+     # Validate that the Space exists
+     print(f"\nValidating Space: {space_id}")
+     validation_url = f"https://huggingface.co/api/spaces/{space_id}"
+     headers = {"Authorization": f"Bearer {token}"}
+ 
+     try:
+         validation_response = requests.get(validation_url, headers=headers)
+         if validation_response.status_code != 200:
+             print(f"Error: Space '{space_id}' not found or not accessible.")
+             print("Please check the Space name and your permissions.")
+             print(f"Space URL would be: https://huggingface.co/spaces/{space_id}")
+             return False
+         else:
+             space_info = validation_response.json()
+             print(f"✅ Space found: {space_info.get('title', space_id)}")
+             print(f"URL: https://huggingface.co/spaces/{space_id}")
+     except Exception as e:
+         print(f"Error validating Space: {e}")
+         return False
+ 
+     # Files to upload
+     files_to_upload = [
+         "app/core/memory.py",
+         "app/core/ingestion.py",
+         "app/ui/streamlit_app.py"
+     ]
+ 
+     # Verify that the files exist locally
+     missing_files = [f for f in files_to_upload if not os.path.exists(f)]
+     if missing_files:
+         print(f"Error: The following files don't exist locally: {missing_files}")
+         return False
+ 
+     # Display a summary before uploading
+     print("\nPreparing to upload these files:")
+     for file_path in files_to_upload:
+         file_size = os.path.getsize(file_path) / 1024  # KB
+         print(f"  - {file_path} ({file_size:.1f} KB)")
+ 
+     # Confirm the upload
+     confirm = input("\nProceed with upload? (y/n): ").lower()
+     if confirm != 'y':
+         print("Upload cancelled by user.")
+         return False
+ 
+     # Upload the files in a single commit
+     print("\n📤 Uploading files in a single commit... ", end="", flush=True)
+     success, message = upload_files_commit(space_id, token, files_to_upload)
+ 
+     if success:
+         print("✅ Success!")
+         print("\n✅ All files uploaded successfully!")
+         print(f"View your Space at: https://huggingface.co/spaces/{space_id}")
+         print("Note: It may take a few minutes for your Space to rebuild with the new changes.")
+         return True
+     else:
+         print("❌ Failed!")
+         print(f"Error: {message}")
+         return False
+ 
+ if __name__ == "__main__":
+     try:
+         success = main()
+         sys.exit(0 if success else 1)
+     except KeyboardInterrupt:
+         print("\nUpload cancelled by user.")
+         sys.exit(1)
+     except Exception as e:
+         print(f"\nUnexpected error: {e}")
+         sys.exit(1)
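The commit payload that upload_with_commit.py builds can be checked offline, without any network calls. A minimal sketch that mirrors the `addOrUpdate` loop using in-memory bytes instead of files on disk (the file path and content here are placeholders):

```python
import base64
import json

def build_operations(files):
    # files: mapping of repo path -> raw bytes, mirroring the loop in upload_files_commit()
    operations = []
    for path, content in files.items():
        operations.append({
            "operation": "addOrUpdate",
            "path": path,
            "content": base64.b64encode(content).decode("ascii"),
            "encoding": "base64",
        })
    return {"operations": operations, "commit_message": "Fix AI responses and file upload handling"}

payload = build_operations({"app/core/memory.py": b"print('hello')\n"})
# The payload is plain JSON-serializable data, ready for requests.post(..., json=payload)
print(json.dumps(payload, indent=2))
```

Base64-encoding the content keeps the payload valid JSON even for binary files, at the cost of roughly a 33% size increase per file.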
upload_with_hf_lib.py ADDED
@@ -0,0 +1,137 @@
+ #!/usr/bin/env python
+ """
+ Upload files to a Hugging Face Space using the official huggingface_hub library
+ """
+ import os
+ import sys
+ from getpass import getpass
+ 
+ def main():
+     print("=" * 60)
+     print(" Upload to Hugging Face Space using huggingface_hub")
+     print("=" * 60)
+ 
+     try:
+         # Import huggingface_hub (installed by the __main__ block if missing)
+         from huggingface_hub import HfApi, login
+ 
+         # Get required information
+         print("\nPlease enter your Hugging Face details:")
+         username = input("Username: ")
+         token = getpass("Access Token: ")
+ 
+         # Log in using the token
+         login(token=token, add_to_git_credential=True)
+ 
+         # Space name with validation
+         while True:
+             space_name = input("Space Name (without username prefix): ")
+             if "/" in space_name:
+                 print("Error: Please enter just the Space name, without a username or slashes")
+             else:
+                 break
+ 
+         # Construct the full Space ID
+         space_id = f"{username}/{space_name}"
+ 
+         # Initialize the API
+         api = HfApi()
+ 
+         # Validate that the user is logged in
+         try:
+             user_info = api.whoami()
+             print(f"\nAuthenticated as: {user_info['name']}")
+         except Exception as e:
+             print(f"Error authenticating: {e}")
+             return False
+ 
+         # Files to upload
+         files_to_upload = [
+             "app/core/memory.py",
+             "app/core/ingestion.py",
+             "app/ui/streamlit_app.py"
+         ]
+ 
+         # Verify that the files exist locally
+         missing_files = [f for f in files_to_upload if not os.path.exists(f)]
+         if missing_files:
+             print(f"Error: The following files don't exist locally: {missing_files}")
+             return False
+ 
+         # Display a summary before uploading
+         print("\nPreparing to upload these files:")
+         for file_path in files_to_upload:
+             file_size = os.path.getsize(file_path) / 1024  # KB
+             print(f"  - {file_path} ({file_size:.1f} KB)")
+ 
+         # Confirm the upload
+         confirm = input("\nProceed with upload? (y/n): ").lower()
+         if confirm != 'y':
+             print("Upload cancelled by user.")
+             return False
+ 
+         # Upload each file
+         print("\nUploading files:")
+         success_count = 0
+ 
+         for file_path in files_to_upload:
+             try:
+                 print(f"📤 Uploading {file_path}... ", end="", flush=True)
+ 
+                 # Upload the file
+                 api.upload_file(
+                     path_or_fileobj=file_path,
+                     path_in_repo=file_path,
+                     repo_id=space_id,
+                     repo_type="space",
+                     commit_message=f"Fix: Improve {os.path.basename(file_path)} for better responses and file handling"
+                 )
+ 
+                 print("✅ Success!")
+                 success_count += 1
+             except Exception as e:
+                 print("❌ Failed!")
+                 print(f"   Error: {str(e)}")
+ 
+         # Print a summary
+         print(f"\nUpload summary: {success_count}/{len(files_to_upload)} files uploaded successfully.")
+ 
+         if success_count == len(files_to_upload):
+             print("\n✅ All files uploaded successfully!")
+             print(f"View your Space at: https://huggingface.co/spaces/{space_id}")
+             print("Note: It may take a few minutes for your Space to rebuild with the new changes.")
+             return True
+         else:
+             print("\n⚠️ Some files failed to upload. Please check the errors above.")
+             print("You may need to upload the files manually through the web interface.")
+             print(f"Go to: https://huggingface.co/spaces/{space_id}/tree/main")
+             return False
+ 
+     except Exception as e:
+         print(f"\n❌ Error: {str(e)}")
+         return False
+ 
+ if __name__ == "__main__":
+     try:
+         # Install huggingface_hub if it is missing (main() imports it lazily)
+         try:
+             import huggingface_hub
+         except ImportError:
+             print("Installing required package: huggingface_hub")
+             import subprocess
+             subprocess.check_call([sys.executable, "-m", "pip", "install", "huggingface_hub"])
+             import huggingface_hub
+ 
+         # Run the main function
+         success = main()
+         sys.exit(0 if success else 1)
+     except KeyboardInterrupt:
+         print("\nUpload cancelled by user.")
+         sys.exit(1)
+     except Exception as e:
+         print(f"\nUnexpected error: {e}")
+         sys.exit(1)