doberst committed · verified
Commit 48c44b2 · Parent(s): ecf0065

Update README.md

Files changed (1):
  1. README.md (+5, -13)
README.md CHANGED
@@ -10,29 +10,21 @@ pinned: false
 Welcome to the llmware HuggingFace page. We believe that the ascendance of LLMs creates a major new application pattern and data
 pipelines that will be transformative in the enterprise, especially in knowledge-intensive industries. Our open source research efforts
 are focused both on the new "ware" ("middleware" and "software" that will wrap and integrate LLMs), as well as building high-quality
-automation-focused enterprise Agent, RAG and embedding models.
+automation-focused enterprise Agent, RAG and embedding small specialized language models.
 
 Our model training initiatives fall into four major categories:
 
-**SLIMs** - small, specialized function calling models for stacking in multi-model, Agent-based workflows
-
+**SLIMs** - small, specialized function calling models for stacking in multi-model, Agent-based workflows -- [SLIMs](https://medium.com/@darrenoberst/slims-small-specialized-models-function-calling-and-multi-model-agents-8c935b341398)
 **BLING/DRAGON** - highly-accurate fact-based question-answering models
-
+-- [SMALL MODEL ACCURACY BENCHMARK](https://medium.com/@darrenoberst/best-small-language-models-for-accuracy-and-enterprise-use-cases-benchmark-results-cf71964759c8) |
+-- [OUR JOURNEY BUILDING ACCURATE ENTERPRISE SMALL MODELS](https://medium.com/@darrenoberst/building-the-most-accurate-small-language-models-our-journey-781474f64d88)
 **Industry-BERT** - industry fine-tuned embedding models
-
 **Private Inference** - Self-Hosting, Packaging and Quantization - GGUF, ONNX, OpenVino
 
-
 Please check out a few of our recent blog postings related to these initiatives:
-[SMALL MODEL ACCURACY BENCHMARK](https://medium.com/@darrenoberst/best-small-language-models-for-accuracy-and-enterprise-use-cases-benchmark-results-cf71964759c8) |
-[OUR JOURNEY BUILDING ACCURATE ENTERPRISE SMALL MODELS](https://medium.com/@darrenoberst/building-the-most-accurate-small-language-models-our-journey-781474f64d88) |
 [THINKING DOES NOT HAPPEN ONE TOKEN AT A TIME](https://medium.com/@darrenoberst/thinking-does-not-happen-one-token-at-a-time-0dd0c6a528ec) |
-[SLIMs](https://medium.com/@darrenoberst/slims-small-specialized-models-function-calling-and-multi-model-agents-8c935b341398) |
-[BLING](https://medium.com/@darrenoberst/small-instruct-following-llms-for-rag-use-case-54c55e4b41a8) |
 [RAG-INSTRUCT-TEST-DATASET](https://medium.com/@darrenoberst/how-accurate-is-rag-8f0706281fd9) |
 [LLMWARE EMERGING STACK](https://medium.com/@darrenoberst/the-emerging-llm-stack-for-rag-deee093af5fa) |
-[MODEL SIZE TRENDS](https://medium.com/@darrenoberst/are-the-mega-llms-driving-the-future-or-they-already-in-the-past-c3b949f9f5a5) |
-[OPEN SOURCE RAG](https://medium.com/@darrenoberst/open-source-llms-in-rag-89d397b39511)
-[1B-3B-7B LLM CAPABILITIES](https://medium.com/@darrenoberst/rag-instruct-capabilities-they-grow-up-so-fast-2647550cdc0a)
+[BECOMING A MASTER FINETUNING CHEF](https://medium.com/@darrenoberst/6-tips-to-becoming-a-master-llm-fine-tuning-chef-143ad735354b)
 
 Interested? [Join us on Discord](https://discord.gg/MhZn5Nc39h)