---
title: ShallowCodeResearch
emoji: π
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: 5.33.0
app_file: app.py
pinned: false
short_description: Coding research assistant that generates code and tests it
tags:
- mcp
- multi-agent
- research
- code-generation
- ai-assistant
- gradio
- python
- web-search
- llm
- modal
python_version: '3.12'
---
# MCP Hub - Multi-Agent AI Research & Code Assistant

Advanced multi-agent system for AI-powered research and code generation.
## What is MCP Hub?

MCP Hub is a multi-agent research and code assistant built on Gradio's Model Context Protocol (MCP) server functionality. It orchestrates specialized AI agents to research a topic and generate executable Python code from the findings.
## Key Features

- **Multi-Agent Architecture**: Specialized agents working in orchestrated workflows
- **Intelligent Research**: Web search with automatic summarization and citation formatting
- **Code Generation**: Context-aware Python code creation with secure execution
- **MCP Server**: Built-in MCP server for seamless agent communication
- **Multiple LLM Support**: Compatible with Nebius, OpenAI, Anthropic, and HuggingFace
- **Secure Execution**: Modal sandbox environment for safe code execution
- **Performance Monitoring**: Metrics collection and health monitoring
## Quick Start

1. **Configure your environment** by setting your API keys in the Settings tab
2. **Choose your LLM provider** (Nebius recommended for best performance)
3. **Enter your research query** in the Orchestrator Flow tab
4. **Run the workflow** and watch the agents collaborate to research the topic and generate code
## Architecture

### Core Agents

- **Question Enhancer**: Breaks complex queries into focused sub-questions
- **Web Search Agent**: Performs targeted searches using the Tavily API
- **LLM Processor**: Handles text processing, summarization, and analysis
- **Citation Formatter**: Manages academic citation formatting (APA style)
- **Code Generator**: Creates context-aware Python code
- **Code Runner**: Executes code in secure Modal sandboxes
- **Orchestrator**: Coordinates the complete workflow
### Workflow Example

```text
User Query: "Create Python code to analyze Twitter sentiment"
        ↓
Question Enhancement: split into focused sub-questions
        ↓
Web Research: search for Twitter APIs, sentiment libraries, examples
        ↓
Context Integration: combine research into a comprehensive context
        ↓
Code Generation: create an executable Python script
        ↓
Secure Execution: run the code in a Modal sandbox
        ↓
Results: code + output + research summary + citations
```
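The pipeline above can be sketched as a plain function that chains the agents. This is a minimal illustration, not the app's actual implementation: `enhance`, `search`, and `generate` are hypothetical stand-ins for the Question Enhancer, Web Search Agent, and Code Generator.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowResult:
    sub_questions: list = field(default_factory=list)
    context: str = ""
    code: str = ""
    citations: list = field(default_factory=list)

def run_workflow(query, enhance, search, generate):
    """Chain the agents: enhance the query, research each
    sub-question, then generate code from the combined context."""
    result = WorkflowResult()
    result.sub_questions = enhance(query)
    snippets = []
    for question in result.sub_questions:
        hits = search(question)  # assumed: list of (text, url) pairs
        snippets.extend(text for text, _ in hits)
        result.citations.extend(url for _, url in hits)
    result.context = "\n\n".join(snippets)
    result.code = generate(query, result.context)
    return result
```

Each agent stays a swappable callable, which is what lets the Individual Agents tab expose them separately from the orchestrated flow.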
## Setup Requirements

### Required API Keys

- **LLM provider** (choose one):
  - Nebius API (recommended)
  - OpenAI API
  - Anthropic API
  - HuggingFace Inference API
- **Tavily API** (for web search)
- **Modal account** (for code execution)
### Environment Configuration

Set these environment variables or configure them in the app:

```bash
LLM_PROVIDER=nebius          # your chosen provider
NEBIUS_API_KEY=your_key_here
TAVILY_API_KEY=your_key_here
# Modal setup is handled automatically
```
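Resolving these variables at startup might look like the sketch below. The function name and the exact validation are illustrative assumptions; the key-naming convention (`NEBIUS_API_KEY`, `OPENAI_API_KEY`, …) follows the variables listed above.

```python
import os

SUPPORTED_PROVIDERS = {"nebius", "openai", "anthropic", "huggingface"}

def load_llm_config(env=None):
    """Resolve the LLM provider and its API key from environment
    variables, defaulting to nebius as recommended above."""
    env = os.environ if env is None else env
    provider = env.get("LLM_PROVIDER", "nebius").lower()
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"unsupported LLM_PROVIDER: {provider}")
    key_name = f"{provider.upper()}_API_KEY"
    api_key = env.get(key_name)
    if not api_key:
        raise ValueError(f"missing environment variable: {key_name}")
    return {"provider": provider, "api_key": api_key}
```

Failing fast on a missing key surfaces configuration problems before the first agent call rather than mid-workflow.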
## Use Cases

### Research & Development

- **Academic Research**: Automated literature review and citation management
- **Technical Documentation**: Generate comprehensive guides with current information
- **Market Analysis**: Research trends and generate analytical reports

### Code Generation

- **Prototype Development**: Rapidly create functional code based on requirements
- **API Integration**: Generate code for working with various APIs and services
- **Data Analysis**: Create scripts for data processing and visualization

### Learning & Education

- **Code Examples**: Generate educational code samples with explanations
- **Concept Exploration**: Research and understand complex programming concepts
- **Best Practices**: Learn current industry standards and methodologies
## Advanced Features

### Performance Monitoring

- Real-time metrics collection
- Response time tracking
- Success rate monitoring
- Resource usage analytics
### Intelligent Caching

- Reduces redundant API calls
- Improves response times
- Configurable TTL settings
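The core of a TTL cache like this fits in a few lines. A minimal sketch, assuming entries are keyed by query string; the class name and default TTL are illustrative, not the app's actual values:

```python
import time

class TTLCache:
    """Time-bounded cache: entries expire after ttl seconds, so
    repeated queries within the window skip the API call."""

    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self._store = {}  # key -> (value, stored_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())
```

Using `time.monotonic()` rather than wall-clock time keeps expiry correct even if the system clock is adjusted.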
### Fault Tolerance

- Circuit breaker protection
- Rate limiting management
- Graceful error handling
- Automatic retry mechanisms
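A retry wrapper with exponential backoff is the simplest of these mechanisms. This sketch is illustrative (the function name and delays are assumptions, not the app's configuration):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on any exception with exponential backoff.
    Re-raises the last error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the failure
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...
```

A circuit breaker extends this idea by tracking consecutive failures and refusing calls outright for a cooldown period once a threshold is crossed.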
### Sandbox Pool Management

- Pre-warmed execution environments
- Optimized performance
- Resource pooling
- Automatic scaling
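The borrow-and-return pattern behind a pre-warmed pool can be sketched with a thread-safe queue. This is a simplified model, not Modal's API: `factory` stands in for whatever creates a warm sandbox, and each sandbox is modeled as a callable that runs code.

```python
import queue

class SandboxPool:
    """Keep a fixed number of pre-warmed sandboxes; callers borrow
    one, run their code, and return it, avoiding per-request
    cold-start cost."""

    def __init__(self, factory, size=2):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())  # warm sandboxes up front

    def run(self, code, timeout=30):
        sandbox = self._pool.get(timeout=timeout)  # borrow a warm one
        try:
            return sandbox(code)
        finally:
            self._pool.put(sandbox)  # always return it to the pool
```

The `finally` clause guarantees the sandbox goes back even when execution raises, so the pool never shrinks under error load.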
## Interface Tabs

- **Orchestrator Flow**: Complete end-to-end workflow
- **Individual Agents**: Access each agent separately for specific tasks
- **Advanced Features**: System monitoring and performance analytics
## MCP Integration

This application demonstrates an advanced MCP (Model Context Protocol) implementation:

- **Server Architecture**: Full MCP server with schema generation
- **Function Registry**: MCP function definitions with type annotations
- **Multi-Agent Communication**: Structured data flow between agents
- **Error Handling**: Robust error management across agent interactions
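Schema generation from typed functions is the key idea: a tool's name, description, and parameters are derived from its signature. The sketch below is a simplified illustration of that pattern, not the real MCP schema format, and `web_search` is a hypothetical placeholder agent.

```python
import inspect

def tool_schema(fn):
    """Derive a simplified MCP-style tool description from a
    function's signature and docstring."""
    sig = inspect.signature(fn)
    params = {}
    for name, p in sig.parameters.items():
        ann = p.annotation
        params[name] = ann.__name__ if ann is not inspect.Parameter.empty else "any"
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": params,
    }

def web_search(query: str, max_results: int = 5) -> list:
    """Search the web and return result snippets."""
    raise NotImplementedError  # hypothetical placeholder agent
```

This is why the docstrings and type hints on agent functions matter: they become the contract that MCP clients see.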
## Performance

- **Response Times**: Optimized for sub-second agent responses
- **Scalability**: Handles concurrent requests efficiently
- **Reliability**: Built-in fault tolerance and monitoring
- **Resource Management**: Intelligent caching and sandbox pooling
## Technical Details

- **Python**: 3.12+ required
- **Framework**: Gradio with MCP server capabilities
- **Execution**: Modal for secure sandboxed code execution
- **Search**: Tavily API for real-time web research
- **Monitoring**: Comprehensive performance and health tracking
Ready to experience the future of AI-assisted research and development? Start by configuring your API keys and dive into the world of multi-agent AI collaboration!