docs/API_REFERENCE.md
# API Reference
This document describes the public Python modules and functions available in AnyCoder.
## `models.py`
### `ModelInfo` dataclass
```python
@dataclass
class ModelInfo:
    name: str
    id: str
    description: str
    default_provider: str = "auto"
```

### `AVAILABLE_MODELS: List[ModelInfo]`

A list of supported models with metadata.

### `find_model(identifier: str) -> Optional[ModelInfo]`

Look up a model by name or ID.
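A minimal sketch of how the registry and lookup could fit together. The `ModelInfo` fields come from the dataclass above; the sample entries and the case-insensitive matching rule are illustrative assumptions, not the actual contents of `models.py`:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelInfo:
    name: str
    id: str
    description: str
    default_provider: str = "auto"

# Illustrative entries -- the real AVAILABLE_MODELS list lives in models.py.
AVAILABLE_MODELS: List[ModelInfo] = [
    ModelInfo("DeepSeek V3", "deepseek-ai/DeepSeek-V3", "General coding model"),
    ModelInfo("Qwen2.5 Coder", "Qwen/Qwen2.5-Coder-32B-Instruct", "Code-focused model"),
]

def find_model(identifier: str) -> Optional[ModelInfo]:
    """Match a model by display name or ID, case-insensitively (assumed rule)."""
    needle = identifier.lower()
    for model in AVAILABLE_MODELS:
        if needle in (model.name.lower(), model.id.lower()):
            return model
    return None
```

With these sample entries, `find_model("deepseek v3")` returns the first entry, and an unknown identifier returns `None`.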
## `inference.py`

### `chat_completion(model_id: str, messages: List[Dict[str, str]], provider: Optional[str] = None, max_tokens: int = 4096) -> str`

Send a one-shot chat completion request and return the full response text.

### `stream_chat_completion(model_id: str, messages: List[Dict[str, str]], provider: Optional[str] = None, max_tokens: int = 4096) -> Generator[str, None, None]`

Stream partial generation results as they arrive.
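To illustrate the relationship between the two functions, here is a hedged sketch in which the one-shot call simply drains the streaming one. The `client` parameter and the `EchoClient` stub are illustrative stand-ins for a real provider connection (the actual functions obtain a client internally via `hf_client.py`):

```python
from typing import Dict, Generator, List, Optional

def stream_chat_completion(model_id: str, messages: List[Dict[str, str]],
                           provider: Optional[str] = None, max_tokens: int = 4096,
                           client=None) -> Generator[str, None, None]:
    """Yield partial text deltas as they arrive from the provider.

    `client` is injected here for illustration only; the real function
    would build one from model_id and provider.
    """
    for delta in client.chat_stream(model=model_id, messages=messages,
                                    max_tokens=max_tokens):
        yield delta

def chat_completion(model_id: str, messages: List[Dict[str, str]],
                    provider: Optional[str] = None, max_tokens: int = 4096,
                    client=None) -> str:
    """One-shot completion: concatenate the streamed chunks."""
    return "".join(stream_chat_completion(model_id, messages, provider,
                                          max_tokens, client=client))

class EchoClient:
    """Stub standing in for a real provider client: echoes the last
    user message back one word at a time."""
    def chat_stream(self, model, messages, max_tokens):
        for word in messages[-1]["content"].split():
            yield word + " "
```

With the stub, `chat_completion("any-model", [{"role": "user", "content": "hello world"}], client=EchoClient())` returns `"hello world "`.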
## `hf_client.py`

### `get_inference_client(model_id: str, provider: str = "auto") -> InferenceClient`

Creates a Hugging Face `InferenceClient` with provider-routing logic.
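The routing logic itself is not documented here; the following is a hypothetical sketch of what prefix-based provider selection could look like. The routing table, the `resolve_provider` helper, and the `"hf-inference"` fallback are all assumptions for illustration:

```python
from typing import Dict

# Hypothetical routing table -- the real get_inference_client() decides
# this from the model entry and user settings.
PROVIDER_BY_PREFIX: Dict[str, str] = {
    "openai/": "openai",
    "google/": "gemini",
}

def resolve_provider(model_id: str, provider: str = "auto") -> str:
    """Honor an explicit provider; otherwise infer one from the model ID
    prefix, falling back to Hugging Face Inference."""
    if provider != "auto":
        return provider
    for prefix, name in PROVIDER_BY_PREFIX.items():
        if model_id.startswith(prefix):
            return name
    return "hf-inference"
```

For example, `resolve_provider("openai/gpt-4o")` infers `"openai"`, while an explicit `provider` argument always wins.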
docs/ARCHITECTURE.md
# Architecture Overview
Below is a high-level diagram of AnyCoder's components and data flow:
```
                       +------------+
                       |    User    |
                       +-----+------+
                             |
                             v
                  +----------+---------+
                  | Gradio UI (app.py) |
                  +----------+---------+
                             |
    +------------------------+------------------------+
    |                        |                        |
    v                        v                        v
models.py             inference.py              plugins.py
(model registry)  (routing & chat_completion)  (extension points)
                             |
                   +---------+---------+
                   |                   |
                   v                   v
             hf_client.py          deploy.py
   (HF/OpenAI/Gemini/etc routing)  (HF Spaces integration)
```
- **UI Layer** (`app.py` + Gradio): handles inputs, outputs, and state.
- **Model Registry** (`models.py`): metadata-driven list of supported models.
- **Inference Layer** (`inference.py`, `hf_client.py`): abstracts provider selection and API calls.
- **Extensions** (`plugins.py`): plugin architecture for community or custom integrations.
- **Deployment** (`deploy.py`): helpers to preview output in an iframe or push it to Hugging Face Spaces.

This separation keeps the layers modular, testable, and easy to extend.