# API Reference

This document describes the public Python modules and functions available in AnyCoder.

---

## `models.py`

### `ModelInfo` dataclass

```python
from dataclasses import dataclass

@dataclass
class ModelInfo:
    name: str                       # human-readable display name
    id: str                         # model identifier (e.g. a Hugging Face Hub repo ID)
    description: str                # short summary of the model
    default_provider: str = "auto"  # inference provider used when none is specified
```

### `AVAILABLE_MODELS: List[ModelInfo]`

A list of supported models with metadata.

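Iterating over the list is straightforward; the entry shown in the comment below is hypothetical, not a guaranteed member of the list:

```python
from models import AVAILABLE_MODELS

# Each entry is a ModelInfo, e.g. ModelInfo(name="DeepSeek V3",
# id="deepseek-ai/DeepSeek-V3", description="...", default_provider="auto")
# (illustrative values only).
for model in AVAILABLE_MODELS:
    print(f"{model.name} ({model.id}) via {model.default_provider}")
```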
### `find_model(identifier: str) -> Optional[ModelInfo]`

Look up a model by name or ID. Returns a `ModelInfo` or `None`.

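A minimal usage sketch (the identifier below is illustrative):

```python
from models import find_model

# Lookup works by display name or by ID, per the description above.
model = find_model("deepseek-ai/DeepSeek-V3")
if model is None:
    raise ValueError("Unknown model identifier")
print(model.name, model.default_provider)
```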
---

## `inference.py`

### `chat_completion(model_id: str, messages: List[Dict[str, str]], provider: Optional[str] = None, max_tokens: int = 4096) -> str`

Send a one-shot chat completion request. Returns the assistant response as a string.

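A sketch of a one-shot call. The `role`/`content` message keys are an assumption based on the `List[Dict[str, str]]` signature and the usual chat-completion convention, and the model ID is illustrative:

```python
from inference import chat_completion

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# Returns the full assistant reply as a single string.
reply = chat_completion(
    model_id="deepseek-ai/DeepSeek-V3",
    messages=messages,
    max_tokens=1024,
)
print(reply)
```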
### `stream_chat_completion(model_id: str, messages: List[Dict[str, str]], provider: Optional[str] = None, max_tokens: int = 4096) -> Generator[str, None, None]`

Stream partial generation results, yielding content chunks as they arrive.

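A streaming sketch under the same assumptions about the message format; concatenating the chunks reproduces the full reply:

```python
from inference import stream_chat_completion

for chunk in stream_chat_completion(
    model_id="deepseek-ai/DeepSeek-V3",  # illustrative model ID
    messages=[{"role": "user", "content": "Explain list comprehensions briefly."}],
):
    print(chunk, end="", flush=True)  # render partial output as it streams in
```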
---

## `hf_client.py`

### `get_inference_client(model_id: str, provider: str = "auto") -> InferenceClient`

Create and return a configured `InferenceClient`, routing to Groq, OpenAI, Gemini, Fireworks, or HF as needed.

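Usage sketch; how credentials are sourced (e.g. from environment variables) is assumed to be handled inside the function:

```python
from hf_client import get_inference_client

# "auto" lets the function choose a provider for this model; passing an
# explicit provider name such as "groq" presumably forces that route.
client = get_inference_client("deepseek-ai/DeepSeek-V3", provider="auto")
```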
---

## `deploy.py`

### `send_to_sandbox(code: str) -> str`

Wrap HTML code in a sandboxed iframe via a data URI for live preview.

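A minimal sketch of the data-URI approach the description implies; the exact sandbox attributes, iframe dimensions, and encoding are assumptions, not the actual implementation:

```python
import base64

def send_to_sandbox_sketch(code: str) -> str:
    """Illustrative only: embed HTML in a sandboxed iframe via a data URI."""
    encoded = base64.b64encode(code.encode("utf-8")).decode("ascii")
    data_uri = f"data:text/html;charset=utf-8;base64,{encoded}"
    return (
        f'<iframe src="{data_uri}" width="100%" height="920" '
        'sandbox="allow-scripts allow-forms"></iframe>'
    )
```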
### `load_project_from_url(url: str) -> Tuple[str, str]`

Import a Hugging Face Space by URL, returning a status message and the code content.

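Usage sketch (the Space URL is illustrative):

```python
from deploy import load_project_from_url

status, code = load_project_from_url("https://huggingface.co/spaces/someuser/some-space")
print(status)       # human-readable import result
print(code[:200])   # first part of the imported code
```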
---

## `plugins.py`

### `PluginManager`

* `discover()`: auto-discover plugins in the `plugins/` namespace.
* `list_plugins() -> List[str]`: return the registered plugin names.
* `run_plugin(name: str, payload: Dict) -> Any`: execute a plugin action (see the usage sketch after this list).
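
A usage sketch tying the three methods together; the plugin name and payload shape are hypothetical:

```python
from plugins import PluginManager

manager = PluginManager()
manager.discover()             # scan the plugins/ namespace for plugins
print(manager.list_plugins())  # e.g. ["formatter", "linter"] -- illustrative

# Execute one plugin action; "formatter" and the payload keys are made up here.
result = manager.run_plugin("formatter", {"code": "x=1"})
print(result)
```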