Update model details
README.md (CHANGED)
@@ -36,16 +36,21 @@ Clicking on the button will download the file.

# Summary of the LLMs

-| :-------- | :------- |:----------------------------------------------------------------------------| :------- |
-| Mistral 7B Instruct v0.2 | Hugging Face (`hf`) | Optional but encouraged; [get here](https://huggingface.co/settings/tokens) | Faster, shorter content |
-| Mistral Nemo Instruct 2407 | Hugging Face (`hf`) | Optional but encouraged; [get here](https://huggingface.co/settings/tokens) | Slower, longer content |
-| Gemini 1.5 Flash | Google Gemini API (`gg`) | Mandatory; [get here](https://aistudio.google.com/apikey) | Faster, longer content |
-| Command R+ | Cohere (`co`) | Mandatory; [get here](https://dashboard.cohere.com/api-keys) | Shorter, simpler content |
+SlideDeck AI allows the use of different LLMs from four online providers. These LLMs offer different styles of content generation. Based on several experiments, SlideDeck AI generally recommends the use of Mistral NeMo and Gemini Flash to generate the slide decks.
+
+Use one of the following LLMs along with relevant API keys/access tokens, as appropriate, to create the content of the slide deck:
+
+| LLM | Provider (code) | Requires API key | Characteristics |
+|:---------------------------------|:---------|:-----------------------------------------------------------------------------|:-------------------------|
+| Mistral 7B Instruct v0.2 | Hugging Face (`hf`) | Optional but encouraged; [get here](https://huggingface.co/settings/tokens) | Faster, shorter content |
+| Mistral NeMo Instruct 2407 | Hugging Face (`hf`) | Optional but encouraged; [get here](https://huggingface.co/settings/tokens) | Slower, longer content |
+| Gemini 1.5 Flash | Google Gemini API (`gg`) | Mandatory; [get here](https://aistudio.google.com/apikey) | Faster, longer content |
+| Gemini 2.0 Flash | Google Gemini API (`gg`) | Mandatory; [get here](https://aistudio.google.com/apikey) | Faster, longer content |
+| Command R+ | Cohere (`co`) | Mandatory; [get here](https://dashboard.cohere.com/api-keys) | Shorter, simpler content |
+| Llama 3.3 70B Instruct Turbo | Together AI (`to`) | Mandatory; [get here](https://api.together.ai/settings/api-keys) | Detailed, slower |
+| Llama 3.1 8B Instruct Turbo 128K | Together AI (`to`) | Mandatory; [get here](https://api.together.ai/settings/api-keys) | Shorter |
+
+The Mistral models (via Hugging Face) do not mandatorily require an access token. However, you are encouraged to get and use your own Hugging Face access token.

In addition, offline LLMs provided by Ollama can be used. Read below to know more.
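If you do supply your own Hugging Face access token, you can confirm it is valid before using it with the app by calling `huggingface_hub.whoami()`. The snippet below is a minimal sketch, not part of SlideDeck AI itself; it assumes the `huggingface_hub` package is installed and that the token is stored in an environment variable whose name (`HF_ACCESS_TOKEN`) is purely illustrative:

```python
import os

from huggingface_hub import whoami

# Read the token from an environment variable; the variable name here is illustrative.
token = os.environ.get("HF_ACCESS_TOKEN")

try:
    # whoami() raises an error if the token is missing, malformed, or revoked.
    info = whoami(token=token)
    print(f"Token is valid for user: {info['name']}")
except Exception as exc:
    print(f"Token check failed: {exc}")
```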
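For the offline route, the Ollama-specific setup is described further below in this README. As a rough illustration only, and assuming the Ollama server is running locally, a model can be pulled and queried through Ollama's Python client; the model name used here is a placeholder, not necessarily one that SlideDeck AI supports:

```python
import ollama

# Download the model locally (placeholder name; see the Ollama section of this README
# for the offline models that SlideDeck AI actually supports).
ollama.pull("mistral-nemo")

# Send a one-off prompt to the locally running Ollama server.
response = ollama.chat(
    model="mistral-nemo",
    messages=[{"role": "user", "content": "Suggest a title for a slide deck on renewable energy."}],
)
print(response["message"]["content"])
```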