<!-- docs/USER_GUIDE.md -->

# User Guide

## Sidebar

- **Model**: Select among HF, OpenAI, Gemini, Groq, and Fireworks models.  
- **Input**: Describe your app or paste code/text.  
- **Generate**: Click to invoke the AI pipeline.

## Tabs

- **Code**: View generated code (editable).  
- **Preview**: Live HTML preview (for web outputs).  
- **History**: Conversation log with assistant.

## Files & Plugins

- Upload reference files (PDF, DOCX, images) for extraction.  
- Use **Plugins** to integrate GitHub, Slack, DB queries, etc.  

---

<!-- docs/API_REFERENCE.md -->

# API Reference

## `models.py`

### `ModelInfo`
- `name: str`  
- `id: str`  
- `description: str`  
- `default_provider: str`

### `find_model(identifier: str) -> Optional[ModelInfo]`
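A minimal sketch of how `ModelInfo` and `find_model` could fit together, assuming a simple list-based registry. The `MODELS` list and its single entry are illustrative, not the project's actual registry contents:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelInfo:
    name: str
    id: str
    description: str
    default_provider: str

# Illustrative registry; the real entries live in models.py.
MODELS: List[ModelInfo] = [
    ModelInfo(
        name="Llama 3 8B",
        id="meta-llama/Meta-Llama-3-8B-Instruct",
        description="General-purpose instruct model",
        default_provider="hf",
    ),
]

def find_model(identifier: str) -> Optional[ModelInfo]:
    """Look up a model by display name or id (case-insensitive)."""
    needle = identifier.lower()
    for m in MODELS:
        if needle in (m.name.lower(), m.id.lower()):
            return m
    return None
```

Returning `Optional[ModelInfo]` lets callers treat an unknown identifier as a recoverable condition rather than an exception.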

## `inference.py`

### `chat_completion(model_id, messages, provider=None, max_tokens=4096) -> str`

### `stream_chat_completion(model_id, messages, provider=None, max_tokens=4096) -> Generator[str, None, None]`
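The two entry points share a signature; the streaming variant yields the reply in chunks that the caller concatenates. A hedged sketch using a stand-in generator (the stub and its chunk contents are invented for illustration, not the real provider call):

```python
from typing import Dict, Generator, List, Optional

Message = Dict[str, str]

def stream_chat_completion_stub(
    model_id: str,
    messages: List[Message],
    provider: Optional[str] = None,
    max_tokens: int = 4096,
) -> Generator[str, None, None]:
    """Stand-in mirroring stream_chat_completion's signature:
    yields the reply in chunks instead of returning it whole."""
    for chunk in ("Hello", ", ", "world", "!"):
        yield chunk

# OpenAI-style message format assumed by both entry points:
messages = [{"role": "user", "content": "Say hello"}]

# A caller reconstructs the full reply by joining the streamed chunks:
reply = "".join(stream_chat_completion_stub("demo-model", messages))
```

Joining the generator this way produces the same string the non-streaming `chat_completion` would return in one piece.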

---

<!-- docs/ARCHITECTURE.md -->

# Architecture

```
user
 └─> Gradio UI ──> app.py
                    ├─> models.py    (registry)
                    ├─> inference.py (routing)
                    ├─> hf_client.py (clients)
                    ├─> plugins.py   (extension)
                    └─> deploy.py    (HF Spaces)
```

- **Data flow**: UI → `generation_code` → `inference.chat_completion` → HF/OpenAI/Gemini/Groq → UI
- **Extensibility**: add new models in `models.py`; add providers in `hf_client.py`; add integrations via `plugins/`
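One way the provider routing described above could be structured, assuming a dict mapping provider names to client callables. The `PROVIDERS` dict and the two `_*_chat` handlers here are hypothetical placeholders for the real `hf_client.py` calls:

```python
from typing import Callable, Dict, List

Message = Dict[str, str]

def _hf_chat(model_id: str, messages: List[Message], max_tokens: int) -> str:
    # Placeholder for the real Hugging Face client call.
    return f"[hf:{model_id}] ok"

def _openai_chat(model_id: str, messages: List[Message], max_tokens: int) -> str:
    # Placeholder for the real OpenAI client call.
    return f"[openai:{model_id}] ok"

# Registering a new provider is a single dict entry;
# models.py supplies each model's default_provider.
PROVIDERS: Dict[str, Callable[[str, List[Message], int], str]] = {
    "hf": _hf_chat,
    "openai": _openai_chat,
}

def chat_completion(model_id: str, messages: List[Message],
                    provider: str = None, max_tokens: int = 4096) -> str:
    handler = PROVIDERS.get(provider or "hf")
    if handler is None:
        raise ValueError(f"Unknown provider: {provider}")
    return handler(model_id, messages, max_tokens)
```

Keeping the dispatch table in one place means adding a provider never touches the UI layer.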
