---
title: ShallowCodeResearch
emoji: π
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: 5.33.0
app_file: app.py
pinned: false
short_description: Coding research assistant that generates code and tests it
tags:
- mcp
- multi-agent
- research
- code-generation
- ai-assistant
- gradio
- python
- web-search
- llm
- modal
python_version: "3.12"
---
# MCP Hub - Multi-Agent AI Research & Code Assistant
**Advanced multi-agent system for AI-powered research and code generation**
## What is MCP Hub?
MCP Hub is a sophisticated multi-agent research and code assistant built using Gradio's Model Context Protocol (MCP) server functionality. It orchestrates specialized AI agents to provide comprehensive research capabilities and generate executable Python code.
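Under the hood, the Gradio app doubles as an MCP server. The sketch below shows the general pattern: a typed, docstring'd function exposed both in the web UI and as an MCP tool via `mcp_server=True`. The `summarize_text` function is an illustrative placeholder, not one of MCP Hub's actual agents.

```python
# Minimal sketch of the Gradio + MCP pattern; the tool itself is a placeholder.
import gradio as gr

def summarize_text(text: str, max_words: int = 50) -> str:
    """Summarize the given text in at most max_words words."""
    words = text.split()
    return " ".join(words[:max_words])  # placeholder logic, not a real summarizer

demo = gr.Interface(fn=summarize_text, inputs=["text", "number"], outputs="text")

if __name__ == "__main__":
    # mcp_server=True starts an MCP server alongside the normal web UI;
    # the tool schema is derived from the function's type hints and docstring.
    demo.launch(mcp_server=True)
```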
## Key Features
- **Multi-Agent Architecture**: Specialized agents working in orchestrated workflows
- **Intelligent Research**: Web search with automatic summarization and citation formatting
- **Code Generation**: Context-aware Python code creation with secure execution
- **MCP Server**: Built-in MCP server for seamless agent communication
- **Multiple LLM Support**: Compatible with Nebius, OpenAI, Anthropic, and HuggingFace
- **Secure Execution**: Modal sandbox environment for safe code execution
- **Performance Monitoring**: Advanced metrics collection and health monitoring
## Quick Start
1. **Configure your environment** by setting up API keys in the Settings tab
2. **Choose your LLM provider** (Nebius recommended for best performance)
3. **Input your research query** in the Orchestrator Flow tab
4. **Watch the magic happen** as agents collaborate to research and generate code
## Architecture
### Core Agents
- **Question Enhancer**: Breaks down complex queries into focused sub-questions
- **Web Search Agent**: Performs targeted searches using Tavily API
- **LLM Processor**: Handles text processing, summarization, and analysis
- **Citation Formatter**: Manages academic citation formatting (APA style)
- **Code Generator**: Creates contextually-aware Python code
- **Code Runner**: Executes code in secure Modal sandboxes
- **Orchestrator**: Coordinates the complete workflow
### Workflow Example
```
User Query: "Create Python code to analyze Twitter sentiment"
↓
Question Enhancement: Split into focused sub-questions
↓
Web Research: Search for Twitter APIs, sentiment libraries, examples
↓
Context Integration: Combine research into comprehensive context
↓
Code Generation: Create executable Python script
↓
Secure Execution: Run code in Modal sandbox
↓
Results: Code + output + research summary + citations
```
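In code, that flow corresponds roughly to the sketch below. All of the agent functions here are stand-in stubs used only to show the shape of the pipeline; they are not the project's real implementations.

```python
# Hypothetical orchestration sketch; every agent function is an illustrative stub.
from typing import List

def enhance_question(query: str) -> List[str]:
    return [f"{query} (aspect {i})" for i in range(1, 4)]            # stub

def web_search(question: str) -> dict:
    return {"question": question, "url": "https://example.com"}      # stub

def summarize(results: List[dict]) -> str:
    return " ".join(r["question"] for r in results)                  # stub

def format_citations(results: List[dict]) -> List[str]:
    return [f"Example Source. Retrieved from {r['url']}" for r in results]  # stub

def generate_code(query: str, context: str) -> str:
    return "print('generated code placeholder')"                     # stub

def run_in_sandbox(code: str) -> str:
    return "sandbox output placeholder"                              # stub

def orchestrate(query: str) -> dict:
    sub_questions = enhance_question(query)             # Question Enhancer
    results = [web_search(q) for q in sub_questions]    # Web Search Agent
    context = summarize(results)                        # LLM Processor
    citations = format_citations(results)               # Citation Formatter
    code = generate_code(query, context)                # Code Generator
    output = run_in_sandbox(code)                       # Code Runner
    return {"code": code, "output": output, "summary": context, "citations": citations}
```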
## Setup Requirements
### Required API Keys
- **LLM Provider** (choose one):
- Nebius API (recommended)
- OpenAI API
- Anthropic API
- HuggingFace Inference API
- **Tavily API** (for web search)
- **Modal Account** (for code execution)
### Environment Configuration
Set these environment variables or configure them in the app's Settings tab:
```bash
LLM_PROVIDER=nebius # Your chosen provider
NEBIUS_API_KEY=your_key_here
TAVILY_API_KEY=your_key_here
# Modal setup handled automatically
```
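A sketch of how provider selection might be read at startup is shown below. The variable names follow the block above, but the validation logic is illustrative rather than the app's actual configuration code.

```python
import os

# Illustrative provider selection based on the environment variables above.
provider = os.getenv("LLM_PROVIDER", "nebius").lower()
api_keys = {
    "nebius": os.getenv("NEBIUS_API_KEY"),
    "tavily": os.getenv("TAVILY_API_KEY"),
}
if provider == "nebius" and not api_keys["nebius"]:
    raise RuntimeError("NEBIUS_API_KEY must be set when LLM_PROVIDER=nebius")
```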
## Use Cases
### Research & Development
- **Academic Research**: Automated literature review and citation management
- **Technical Documentation**: Generate comprehensive guides with current information
- **Market Analysis**: Research trends and generate analytical reports
### Code Generation
- **Prototype Development**: Rapidly create functional code based on requirements
- **API Integration**: Generate code for working with various APIs and services
- **Data Analysis**: Create scripts for data processing and visualization
### Learning & Education
- **Code Examples**: Generate educational code samples with explanations
- **Concept Exploration**: Research and understand complex programming concepts
- **Best Practices**: Learn current industry standards and methodologies
## Advanced Features
### Performance Monitoring
- Real-time metrics collection
- Response time tracking
- Success rate monitoring
- Resource usage analytics
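One generic way to collect such per-agent metrics is a small decorator like the sketch below; this is an assumption about the approach, not the app's actual monitoring code.

```python
import time
from collections import defaultdict
from functools import wraps

# Per-agent counters: call count, error count, cumulative response time.
metrics = defaultdict(lambda: {"calls": 0, "errors": 0, "total_seconds": 0.0})

def track(agent_name: str):
    """Generic sketch of call counting and response-time tracking per agent."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            except Exception:
                metrics[agent_name]["errors"] += 1
                raise
            finally:
                metrics[agent_name]["calls"] += 1
                metrics[agent_name]["total_seconds"] += time.perf_counter() - start
        return wrapper
    return decorator
```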
### Intelligent Caching
- Reduces redundant API calls
- Improves response times
- Configurable TTL settings
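A minimal TTL cache along these lines can deduplicate repeated search or LLM calls; the class below is a generic sketch, not the project's caching layer.

```python
import time

class TTLCache:
    """Minimal TTL cache sketch for deduplicating repeated API calls."""
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:   # expired: evict and miss
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```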
### Fault Tolerance
- Circuit breaker protection
- Rate limiting management
- Graceful error handling
- Automatic retry mechanisms
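The retry side of this can be as simple as exponential backoff with jitter, as in the generic sketch below (an assumption about the approach, not the app's exact fault-tolerance code).

```python
import random
import time

def call_with_retry(fn, attempts: int = 3, base_delay: float = 0.5):
    """Generic retry-with-backoff sketch for flaky external calls."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of attempts: propagate the error
            # Exponential backoff with a little jitter before retrying.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```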
### Sandbox Pool Management
- Pre-warmed execution environments
- Optimized performance
- Resource pooling
- Automatic scaling
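Conceptually, a pre-warmed pool looks like the sketch below. The `create_sandbox` factory is hypothetical and stands in for whatever actually provisions a Modal sandbox in the app.

```python
import queue

class SandboxPool:
    """Sketch of a pre-warmed sandbox pool; `create_sandbox` is a hypothetical factory."""
    def __init__(self, create_sandbox, size: int = 2):
        self._create = create_sandbox
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(self._create())   # pre-warm a few environments up front

    def acquire(self):
        try:
            return self._pool.get_nowait()   # reuse a warm sandbox if one is free
        except queue.Empty:
            return self._create()            # otherwise create one on demand

    def release(self, sandbox):
        self._pool.put(sandbox)              # return it for the next request
```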
## Interface Tabs
1. **Orchestrator Flow**: Complete end-to-end workflow
2. **Individual Agents**: Access each agent separately for specific tasks
3. **Advanced Features**: System monitoring and performance analytics
## MCP Integration
This application demonstrates advanced MCP (Model Context Protocol) implementation:
- **Server Architecture**: Full MCP server with schema generation
- **Function Registry**: Proper MCP function definitions with typing
- **Multi-Agent Communication**: Structured data flow between agents
- **Error Handling**: Robust error management across agent interactions
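Structured data flow between agents typically means passing a common result envelope rather than raw strings. The dataclass below is an illustrative assumption about what such an envelope could look like, not the project's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentResult:
    """Illustrative envelope for structured data passed between agents."""
    agent: str                                   # which agent produced this result
    success: bool                                # whether the step completed
    data: dict = field(default_factory=dict)     # agent-specific payload
    citations: List[str] = field(default_factory=list)
    error: str | None = None                     # populated on failure
```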
## Performance
- **Response Times**: Optimized for sub-second agent responses
- **Scalability**: Handles concurrent requests efficiently
- **Reliability**: Built-in fault tolerance and monitoring
- **Resource Management**: Intelligent caching and pooling
## Technical Details
- **Python**: 3.12+ required
- **Framework**: Gradio with MCP server capabilities
- **Execution**: Modal for secure sandboxed code execution
- **Search**: Tavily API for real-time web research
- **Monitoring**: Comprehensive performance and health tracking
---
**Ready to experience the future of AI-assisted research and development?**
Start by configuring your API keys and dive into the world of multi-agent AI collaboration!