pinned: false
---

# EmbeddedLLM

**About EmbeddedLLM**

EmbeddedLLM is an open-source company dedicated to advancing the field of Large Language Models (LLMs) through innovative backend solutions and hardware optimizations. Our mission is to make powerful generative models work on all platforms, from edge to private cloud, ensuring accessibility and efficiency for a wide range of applications.

**Highlighted Repositories**

1. **[EmbeddedLLM/JamAIBase](https://github.com/EmbeddedLLM/JamAIBase)**
   - **Description**: JamAI Base is an open-source RAG (Retrieval-Augmented Generation) backend platform that integrates an embedded database (SQLite) and an embedded vector database (LanceDB) with managed memory and RAG capabilities. It features built-in LLM, vector-embedding, and reranker orchestration and management, all accessible through an intuitive, spreadsheet-like UI and a simple REST API.
   - **Key Features**:
     - Embedded database (SQLite) and vector database (LanceDB)
     - Managed memory and RAG capabilities
     - Built-in LLM, vector-embedding, and reranker orchestration
     - Intuitive spreadsheet-like UI
     - Simple REST API
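   To give a feel for the REST API, here is a minimal sketch of adding a row to a table using only the Python standard library. The base URL, port, endpoint path, and table/column names below are illustrative assumptions, not the documented API — consult the JamAI Base docs for the real routes.

   ```python
   import json
   from urllib.request import Request, urlopen

   # Hypothetical server address and endpoint path, for illustration only.
   BASE_URL = "http://localhost:6969/api"

   def build_add_row_payload(table_id: str, row: dict) -> dict:
       """Assemble a JSON body that adds one row to a table."""
       return {"table_id": table_id, "data": [row], "stream": False}

   def add_row(payload: dict) -> dict:
       """POST the payload to the (assumed) row-add endpoint."""
       req = Request(
           f"{BASE_URL}/v1/gen_tables/knowledge/rows/add",
           data=json.dumps(payload).encode("utf-8"),
           headers={"Content-Type": "application/json"},
       )
       with urlopen(req) as resp:
           return json.load(resp)

   if __name__ == "__main__":
       # Build (but do not send) a sample payload.
       payload = build_add_row_payload("documents", {"Title": "Hello", "Text": "World"})
       print(json.dumps(payload, indent=2))
   ```
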
2. **[EmbeddedLLM/vllm-rocm](https://github.com/EmbeddedLLM/vllm-rocm)**
   - **Description**: This repository is a port of vLLM to AMD GPUs, providing a high-throughput, memory-efficient inference and serving engine for LLMs optimized for ROCm.
   - **Key Features**:
     - Vision-language model support
     - New features not yet available upstream
     - Optimized for AMD GPUs with ROCm support
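   vLLM exposes an OpenAI-compatible HTTP server, so a ROCm deployment can be queried like any OpenAI-style endpoint. The sketch below builds a `/v1/completions` request with the standard library; the local URL and model name are assumptions (it presumes you have started a server, e.g. with `vllm serve <model>` on an AMD GPU).

   ```python
   import json
   from urllib.request import Request, urlopen

   # Assumed local vLLM server and placeholder model name.
   BASE_URL = "http://localhost:8000/v1"
   MODEL = "meta-llama/Llama-3.1-8B-Instruct"

   def build_completion_payload(prompt: str, max_tokens: int = 64) -> dict:
       """Assemble an OpenAI-style /v1/completions request body."""
       return {"model": MODEL, "prompt": prompt, "max_tokens": max_tokens}

   def complete(payload: dict) -> str:
       """Send the request and return the generated text."""
       req = Request(
           f"{BASE_URL}/completions",
           data=json.dumps(payload).encode("utf-8"),
           headers={"Content-Type": "application/json"},
       )
       with urlopen(req) as resp:
           return json.load(resp)["choices"][0]["text"]

   if __name__ == "__main__":
       # Build (but do not send) a sample request body.
       print(json.dumps(build_completion_payload("What is ROCm?")))
   ```
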
**Join Us**

We invite you to explore our repositories and models, contribute to our projects, and join us in pushing the boundaries of what's possible with LLMs.