ollama prereq README update
README.md CHANGED

@@ -54,8 +54,47 @@ flowchart LR
## Prerequisites

KnowLang uses [Ollama](https://ollama.com) as its default LLM and embedding provider. Before installing KnowLang:

1. Install Ollama:

   ```bash
   # Check the official download instructions at https://ollama.com/download
   curl -fsSL https://ollama.com/install.sh | sh
   ```

2. Pull the required models:

   ```bash
   # For LLM responses
   ollama pull llama3.2

   # For code embeddings
   ollama pull mxbai-embed-large
   ```

3. Verify Ollama is running:

   ```bash
   ollama list
   ```

You should see both `llama3.2` and `mxbai-embed-large` in the list of available models.

Note: While Ollama is the default choice for easy setup, KnowLang supports other LLM providers through configuration.
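
The `ollama list` check can also be scripted against Ollama's local HTTP API, which serves the installed model list at `/api/tags` on its default port 11434. A minimal sketch (the helper names are illustrative, not part of KnowLang):

```python
import json
import urllib.request

REQUIRED = {"llama3.2", "mxbai-embed-large"}

def missing_models(tags, required):
    """Return the required models absent from an Ollama /api/tags response."""
    # Installed names may carry a tag suffix such as ":latest"; compare base names.
    present = {m["name"].split(":")[0] for m in tags.get("models", [])}
    return set(required) - present

def check(host="http://localhost:11434"):
    """Fetch the local model list from a running Ollama and report missing models."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        return missing_models(json.load(resp), REQUIRED)
```

Calling `check()` returns an empty set once both models have been pulled.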

## Quick Start

### System Requirements

- **RAM**: Minimum 16GB recommended (Ollama models require significant memory)
- **Storage**: At least 10GB free space for model files
- **OS**:
  - Linux (recommended)
  - macOS 12+ (Intel or Apple Silicon)
  - Windows 10+ with WSL2
- **Python**: 3.10 or higher
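
Two of the requirements above (free storage and Python version) can be verified with the standard library alone before installing; a minimal sketch (the function names are illustrative):

```python
import shutil
import sys

def python_ok(minimum=(3, 10)):
    """True when the running interpreter meets the minimum Python version."""
    return sys.version_info[:2] >= minimum

def storage_ok(path=".", required_gb=10):
    """True when the filesystem holding `path` has at least `required_gb` GB free."""
    return shutil.disk_usage(path).free >= required_gb * 1024**3
```

Running both before installing surfaces an unsupported interpreter or a nearly full disk early, rather than mid-download of a multi-gigabyte model.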

### Installation

```bash