configuration README.md

- README.md +1 -1
- configuration.md +100 -0
README.md
CHANGED
@@ -80,7 +80,7 @@ ollama list

You should see both `llama3.2` and `mxbai-embed-large` in the list of available models.

-Note: While Ollama is the default choice for easy setup, KnowLang supports other LLM providers through configuration.
+Note: While Ollama is the default choice for easy setup, KnowLang supports other LLM providers through configuration. See our [Configuration Guide](configuration.md) for using alternative providers like OpenAI or Anthropic.

## Quick Start
configuration.md
ADDED
@@ -0,0 +1,100 @@
# Configuration Guide

KnowLang uses [pydantic-settings](https://docs.pydantic.dev/latest/concepts/pydantic_settings/) for configuration management. Settings can be provided through environment variables, `.env` files, or programmatically.
## Quick Start

1. Copy the example configuration:

   ```bash
   cp .env.example .env
   ```

2. Modify settings as needed in `.env`
## Core Settings

### LLM Settings

```env
# Default is Ollama with llama3.2
LLM__MODEL_NAME=llama3.2
LLM__MODEL_PROVIDER=ollama
LLM__API_KEY=your_api_key  # Required for providers like OpenAI
```
Supported providers:

- `ollama`: Local models through Ollama
- `openai`: OpenAI models (requires API key)
- `anthropic`: Anthropic models (requires API key)
### Embedding Settings

```env
# Default is Ollama with mxbai-embed-large
EMBEDDING__MODEL_NAME=mxbai-embed-large
EMBEDDING__MODEL_PROVIDER=ollama
EMBEDDING__API_KEY=your_api_key  # Required for providers like OpenAI
```
### Database Settings

```env
# ChromaDB configuration
DB__PERSIST_DIRECTORY=./chromadb/mycode
DB__COLLECTION_NAME=code
DB__CODEBASE_DIRECTORY=./
```
### Parser Settings

```env
# Language support and file patterns
PARSER__LANGUAGES='{"python": {"enabled": true, "file_extensions": [".py"]}}'
PARSER__PATH_PATTERNS='{"include": ["**/*"], "exclude": ["**/venv/**", "**/.git/**"]}'
```
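Note that these two values are JSON strings: pydantic-settings decodes complex (dict/list) fields from JSON before validating them. A plain-Python sketch of what the values above decode to:

```python
import json

# The PARSER__* values from the example above, exactly as they appear in .env
raw_languages = '{"python": {"enabled": true, "file_extensions": [".py"]}}'
raw_patterns = '{"include": ["**/*"], "exclude": ["**/venv/**", "**/.git/**"]}'

languages = json.loads(raw_languages)
patterns = json.loads(raw_patterns)

print(languages["python"]["file_extensions"])  # ['.py']
print(patterns["exclude"])  # ['**/venv/**', '**/.git/**']
```

Malformed JSON here will fail at settings load time, so quote the values carefully in your shell or `.env` file.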
### Chat Interface Settings

```env
CHAT__MAX_CONTEXT_CHUNKS=5
CHAT__SIMILARITY_THRESHOLD=0.7
CHAT__INTERFACE_TITLE='Code Repository Q&A Assistant'
```
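To make the two retrieval knobs concrete, here is a rough sketch (not KnowLang's actual retrieval code) of how a similarity threshold and a chunk cap typically interact: chunks below the threshold are discarded, then at most `MAX_CONTEXT_CHUNKS` of the most similar survivors are passed to the LLM:

```python
def select_context(chunks, max_context_chunks=5, similarity_threshold=0.7):
    """Hypothetical helper. chunks: list of (text, similarity) pairs."""
    # Drop chunks that are not similar enough to the query
    kept = [c for c in chunks if c[1] >= similarity_threshold]
    # Keep only the most similar ones, up to the configured cap
    kept.sort(key=lambda c: c[1], reverse=True)
    return kept[:max_context_chunks]

chunks = [("a", 0.9), ("b", 0.65), ("c", 0.8), ("d", 0.72)]
print([text for text, _ in select_context(chunks)])  # ['a', 'c', 'd']
```

Raising the threshold trades recall for precision; raising the chunk cap trades prompt length for context coverage.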
## Advanced Configuration

### Using Multiple Models

You can configure different models for different purposes:

```env
# Main LLM for responses
LLM__MODEL_NAME=llama3.2
LLM__MODEL_PROVIDER=ollama

# Evaluation model
EVALUATOR__MODEL_NAME=gpt-4
EVALUATOR__MODEL_PROVIDER=openai

# Embedding model
EMBEDDING__MODEL_NAME=mxbai-embed-large
EMBEDDING__MODEL_PROVIDER=ollama
```
### Reranker Configuration

```env
RERANKER__ENABLED=true
RERANKER__MODEL_NAME=rerank-2
RERANKER__MODEL_PROVIDER=voyage
RERANKER__TOP_K=4
```
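A reranker is a second scoring pass: the vector store returns a broad candidate set, and the reranker reorders it with a stronger relevance model, keeping only `RERANKER__TOP_K` results. A minimal sketch of that control flow, where the toy `overlap` scorer stands in for a real rerank API call (e.g. Voyage's `rerank-2`):

```python
def rerank(query, candidates, score, top_k=4):
    # 'score' stands in for the provider's rerank scoring call
    ranked = sorted(candidates, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:top_k]

# Toy scorer: count words shared between query and document
def overlap(query, doc):
    return len(set(query.split()) & set(doc.split()))

docs = ["parse python files", "chat interface", "python parser settings", "db"]
print(rerank("python parser", docs, overlap, top_k=2))
# ['python parser settings', 'parse python files']
```

With `RERANKER__ENABLED=false`, retrieval results are used in their original similarity order.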
### Analytics Integration

```env
CHAT_ANALYTICS__ENABLED=true
CHAT_ANALYTICS__PROVIDER=mixpanel
CHAT_ANALYTICS__API_KEY=your_api_key
```
## Further Reading

- For detailed configuration options, see the [pydantic-settings documentation](https://docs.pydantic.dev/latest/concepts/pydantic_settings/)
- For model-specific configuration, see the provider documentation:
  - [Ollama Models](https://ollama.ai/library)
  - [OpenAI Models](https://platform.openai.com/docs/models)
  - [Anthropic Models](https://www.anthropic.com/models)