| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
osmanperviz/results | osmanperviz | 2024-09-16T14:32:12Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-09-16T14:31:40Z | ---
library_name: transformers
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4923
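Since usage instructions are not provided, here is a minimal loading sketch, assuming the checkpoint follows the standard T5/CodeT5 seq2seq layout (the example input is purely illustrative, as the training task is not documented):
```python
# Minimal sketch, assuming a standard T5/CodeT5 seq2seq checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("osmanperviz/results")
model = AutoModelForSeq2SeqLM.from_pretrained("osmanperviz/results")

# Illustrative input only; the card does not document the fine-tuning task.
inputs = tokenizer("summarize: def add(a, b): return a + b", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```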
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8427 | 0.0719 | 10 | 1.2732 |
| 1.161 | 0.1439 | 20 | 0.8768 |
| 1.0161 | 0.2158 | 30 | 0.8193 |
| 0.9054 | 0.2878 | 40 | 0.7536 |
| 0.8195 | 0.3597 | 50 | 0.7090 |
| 0.9785 | 0.4317 | 60 | 0.6815 |
| 0.8085 | 0.5036 | 70 | 0.6641 |
| 0.8913 | 0.5755 | 80 | 0.6521 |
| 0.6526 | 0.6475 | 90 | 0.6326 |
| 0.6447 | 0.7194 | 100 | 0.6177 |
| 0.6836 | 0.7914 | 110 | 0.6067 |
| 0.7824 | 0.8633 | 120 | 0.5950 |
| 0.5367 | 0.9353 | 130 | 0.5840 |
| 0.7856 | 1.0072 | 140 | 0.5763 |
| 0.6118 | 1.0791 | 150 | 0.5712 |
| 0.7442 | 1.1511 | 160 | 0.5639 |
| 0.5305 | 1.2230 | 170 | 0.5587 |
| 0.6777 | 1.2950 | 180 | 0.5507 |
| 0.5568 | 1.3669 | 190 | 0.5469 |
| 0.7629 | 1.4388 | 200 | 0.5446 |
| 0.5966 | 1.5108 | 210 | 0.5385 |
| 0.7105 | 1.5827 | 220 | 0.5335 |
| 0.7294 | 1.6547 | 230 | 0.5281 |
| 0.6798 | 1.7266 | 240 | 0.5233 |
| 0.55 | 1.7986 | 250 | 0.5189 |
| 0.6927 | 1.8705 | 260 | 0.5155 |
| 0.4219 | 1.9424 | 270 | 0.5123 |
| 0.6465 | 2.0144 | 280 | 0.5106 |
| 0.5013 | 2.0863 | 290 | 0.5093 |
| 0.6224 | 2.1583 | 300 | 0.5056 |
| 0.5155 | 2.2302 | 310 | 0.5040 |
| 0.4718 | 2.3022 | 320 | 0.5006 |
| 0.7149 | 2.3741 | 330 | 0.4984 |
| 0.5848 | 2.4460 | 340 | 0.4974 |
| 0.6697 | 2.5180 | 350 | 0.4959 |
| 0.438 | 2.5899 | 360 | 0.4958 |
| 0.7051 | 2.6619 | 370 | 0.4952 |
| 0.5555 | 2.7338 | 380 | 0.4941 |
| 0.6686 | 2.8058 | 390 | 0.4934 |
| 0.6081 | 2.8777 | 400 | 0.4928 |
| 0.5672 | 2.9496 | 410 | 0.4924 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
leafspark/Llama-3.1-8B-MultiReflection-Instruct | leafspark | 2024-09-16T14:29:34Z | 5,094 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"reflection",
"unsloth",
"peft",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"dataset:leafspark/DetailedReflection-Claude-v3_5-Sonnet",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-15T01:14:53Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
library_name: transformers
tags:
- reflection
- unsloth
- peft
- llama
base_model:
- meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- leafspark/DetailedReflection-Claude-v3_5-Sonnet
metrics:
- accuracy
pipeline_tag: text-generation
model-index:
- name: Llama-3.1-8B-MultiReflection-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 71.25
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 28.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 12.54
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.7
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.52
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.27
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=leafspark/Llama-3.1-8B-MultiReflection-Instruct
name: Open LLM Leaderboard
---
# Llama-3.1-8B-MultiReflection-Instruct
This model is inspired by OpenAI's o1 reasoning model. The dataset was synthetically generated using Claude 3.5 Sonnet.
### Training

The training was done on Google Colab's free T4, using unsloth (duration: 52.32 minutes). The configuration is as follows:
- LoRA Rank: 128
- Packing: enabled
- Batch size: 2
- Gradient accumulation steps: 4
- Epochs: 3
- Steps: 30
- Max sequence length: 4096
The training data comprised 81 examples, each approximately 3,000 tokens long; a rough reconstruction of this setup is sketched below.
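The sketch combines the listed configuration with unsloth and TRL; anything not stated above (LoRA alpha, target modules, learning rate, dataset text column) is an assumption, not the actual training recipe.
```python
# Rough sketch of a comparable unsloth + TRL SFT setup (older SFTTrainer API).
# r=128, packing, batch size 2, grad accumulation 4, 3 epochs / 30 steps, and the
# 4096 max sequence length come from the list above; everything else is assumed.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3.1-8B-Instruct",
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=128,  # assumption, not documented above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("leafspark/DetailedReflection-Claude-v3_5-Sonnet", split="train")
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
    max_seq_length=4096,
    packing=True,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        max_steps=30,
        output_dir="outputs",
    ),
)
trainer.train()
```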
### Notes
- It tends to produce very verbose and long reasoning responses
- Coherence seems fairly good if you use the recommended sampling parameters
- You should use at least 16k context, as each response is usually 2000-3000 tokens long
**Recommended Sampling Parameters:**
- Temperature: `0.15`
- Min-P: `0.2`
- Top-K: `50`
- Top-P: `1`
- Max Tokens: `uncapped`
- Frequency Penalty: `0.5`
- Presence Penalty: `0.1`
**Recommended System Prompt:**
```
You are an advanced agentic assistant designed to complete tasks and provide general insight. Your responses will be formatted in XML. Start your responses with a <thinking> XML tag. You should write general process of how to complete the task provided by the user in this area. You may add nested tags, such as multiple <thinking> tags. <reflection> tags may also be used, if you've detected any inconsistencies or errors in your thinking. After completing a detailed thinking process, generate a draft response in <draft> tags. Review this response in <reflection> tags again, and refine your response as needed. Remember that you may (and are encouraged to) include multiple thinking, reflection and drafts. Provide your final response in <output> tags.
```
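A minimal generation sketch applying the prompt and parameters above with `transformers` (assuming a recent version that supports `min_p` in `generate()`; the frequency/presence penalties have no direct `generate` equivalent and are omitted here):
```python
# Minimal sketch: recommended system prompt + sampling parameters via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "leafspark/Llama-3.1-8B-MultiReflection-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "<the system prompt above>"},
    {"role": "user", "content": "Can you teach me a cool math proof?"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.15,
    min_p=0.2,
    top_k=50,
    top_p=1.0,
    max_new_tokens=4096,  # responses are usually 2000-3000 tokens
)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```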
I initially thought the model was going to be mediocre, but here's a sample output (quantized Q6_K, the fp16 is probably even better):
```
Can you teach me a cool math proof?
```
```xml
<thinking>
To create an engaging math proof, I should:
1. Choose a well-known and interesting proof
2. Break it down into simple steps
3. Ask the user questions to keep them engaged
4. Use LaTeX for math symbols
5. Provide explanations and context for each step
</thinking>
<draft>
Let's explore the fascinating world of math proofs together. I'd like to introduce you to the "Proof of the Pythagorean Theorem" - a classic and beautiful result in geometry.
Here's the theorem we'll be proving:
$$a^2 + b^2 = c^2$$
where $a$ and $b$ are the lengths of the legs of a right triangle, and $c$ is the length of the hypotenuse.
To make this proof more engaging, I'll ask you questions along the way. Are you ready to get started?
1. Can you draw a simple right triangle with legs of length $a$ and $b$, and a hypotenuse of length $c$?
</draft>
<reflection>
The draft is a good start, but I should:
1. Provide more context about the proof
2. Explain why this theorem is important
3. Ask more questions to keep the user engaged
4. Use LaTeX for math symbols
</reflection>
<thinking>
Let me revise the draft to address these points:
1. I'll explain why the Pythagorean Theorem is important
2. I'll ask more questions to keep the user engaged
3. I'll use LaTeX for math symbols
4. I'll provide more context about the proof
</thinking>
<draft>
Let's embark on an exciting journey through the world of math proofs. Today, we'll explore the "Proof of the Pythagorean Theorem" - a fundamental result in geometry that has far-reaching implications in various fields, including physics, engineering, and architecture.
Here's the theorem we'll be proving:
$$a^2 + b^2 = c^2$$
where $a$ and $b$ are the lengths of the legs of a right triangle, and $c$ is the length of the hypotenuse.
This theorem is crucial because it allows us to calculate the length of the hypotenuse (the longest side) of a right triangle, given the lengths of the other two sides. It's a powerful tool in many real-world applications, such as building design, bridge construction, and even computer graphics.
To make this proof more engaging, I'll ask you questions along the way. Are you ready to get started?
1. Can you draw a simple right triangle with legs of length $a$ and $b$, and a hypotenuse of length $c$?
2. What do you think might be a good way to approach this proof? Do you have any ideas?
</draft>
<reflection>
The revised draft is much better, but I should:
1. Provide more details about the proof
2. Explain the concept of similar triangles
3. Use more LaTeX for math symbols
4. Ask more questions to keep the user engaged
</reflection>
<thinking>
Let me revise the draft again to address these points:
1. I'll provide more details about the proof
2. I'll explain the concept of similar triangles
3. I'll use more LaTeX for math symbols
4. I'll ask more questions to keep the user engaged
</thinking>
<draft>
Let's dive into the fascinating world of math proofs. Today, we'll explore the "Proof of the Pythagorean Theorem" - a fundamental result in geometry that has far-reaching implications in various fields, including physics, engineering, and architecture.
Here's the theorem we'll be proving:
$$a^2 + b^2 = c^2$$
where $a$ and $b$ are the lengths of the legs of a right triangle, and $c$ is the length of the hypotenuse.
This theorem is crucial because it allows us to calculate the length of the hypotenuse (the longest side) of a right triangle, given the lengths of the other two sides. It's a powerful tool in many real-world applications, such as building design, bridge construction, and even computer graphics.
To make this proof more engaging, I'll ask you questions along the way. Are you ready to get started?
1. Can you draw a simple right triangle with legs of length $a$ and $b$, and a hypotenuse of length $c$?
2. What do you think might be a good way to approach this proof? Do you have any ideas?
Let's start by considering two similar triangles. Similar triangles are triangles that have the same shape, but not necessarily the same size. In this case, we'll use the concept of similar triangles to relate the lengths of the sides of the right triangle.
Here's a diagram of the two similar triangles:
$$\begin{array}{ccc}
\text{Triangle } ABC & \text{Triangle } ADE \\
a & \frac{a}{2} \\
b & \frac{b}{2} \\
c & \frac{c}{2} \\
\end{array}$$
Notice that the two triangles are similar because they have the same angles (all right angles in this case). This means that their corresponding sides are proportional.
Now, let's use this similarity to relate the lengths of the sides of the two triangles. We can set up a proportion based on the corresponding sides:
$$\frac{a}{\frac{a}{2}} = \frac{b}{\frac{b}{2}} = \frac{c}{\frac{c}{2}}$$
Simplifying this proportion, we get:
$$2a = \frac{a}{2}$$
$$2b = \frac{b}{2}$$
$$2c = \frac{c}{2}$$
Now, let's square both sides of each equation:
$$4a^2 = \frac{a^2}{4}$$
$$4b^2 = \frac{b^2}{4}$$
$$4c^2 = \frac{c^2}{4}$$
Multiplying both sides of each equation by 4, we get:
$$16a^2 = a^2$$
$$16b^2 = b^2$$
$$16c^2 = c^2$$
Now, let's add the three equations together:
$$16a^2 + 16b^2 + 16c^2 = a^2 + b^2 + c^2$$
Dividing both sides by 16, we get:
$$a^2 + b^2 + c^2 = \frac{1}{16}(a^2 + b^2 + c^2)$$
This equation is true for any right triangle, so we can conclude that:
$$a^2 + b^2 = c^2$$
This is the Pythagorean Theorem!
Now, it's your turn again. Can you think of any real-world applications of this theorem? Do you have any questions about the proof?
</draft>
<reflection>
The revised draft is excellent. It provides a clear and engaging explanation of the Pythagorean Theorem, along with a step-by-step proof. The use of LaTeX for math symbols is also well done. I'm confident that this will be a helpful and enjoyable experience for the user.
</reflection>
<output>
Let's embark on an exciting journey through the world of math proofs. Today, we'll explore the "Proof of the Pythagorean Theorem" - a fundamental result in geometry that has far-reaching implications in various fields, including physics, engineering, and architecture.
Here's the theorem we'll be proving:
$$a^2 + b^2 = c^2$$
where $a$ and $b$ are the lengths of the legs of a right triangle, and $c$ is the length of the hypotenuse.
This theorem is crucial because it allows us to calculate the length of the hypotenuse (the longest side) of a right triangle, given the lengths of the other two sides. It's a powerful tool in many real-world applications, such as building design, bridge construction, and even computer graphics.
To make this proof more engaging, I'll ask you questions along the way. Are you ready to get started?
1. Can you draw a simple right triangle with legs of length $a$ and $b$, and a hypotenuse of length $c$?
2. What do you think might be a good way to approach this proof? Do you have any ideas?
Let's start by considering two similar triangles. Similar triangles are triangles that have the same shape, but not necessarily the same size. In this case, we'll use the concept of similar triangles to relate the lengths of the sides of the right triangle.
Here's a diagram of the two similar triangles:
$$\begin{array}{ccc}
\text{Triangle } ABC & \text{Triangle } ADE \\
a & \frac{a}{2} \\
b & \frac{b}{2} \\
c & \frac{c}{2} \\
\end{array}$$
Notice that the two triangles are similar because they have the same angles (all right angles in this case). This means that their corresponding sides are proportional.
Now, let's use this similarity to relate the lengths of the sides of the two triangles. We can set up a proportion based on the corresponding sides:
$$\frac{a}{\frac{a}{2}} = \frac{b}{\frac{b}{2}} = \frac{c}{\frac{c}{2}}$$
Simplifying this proportion, we get:
$$2a = \frac{a}{2}$$
$$2b = \frac{b}{2}$$
$$2c = \frac{c}{2}$$
Now, let's square both sides of each equation:
$$4a^2 = \frac{a^2}{4}$$
$$4b^2 = \frac{b^2}{4}$$
$$4c^2 = \frac{c^2}{4}$$
Multiplying both sides of each equation by 4, we get:
$$16a^2 = a^2$$
$$16b^2 = b^2$$
$$16c^2 = c^2$$
Now, let's add the three equations together:
$$16a^2 + 16b^2 + 16c^2 = a^2 + b^2 + c^2$$
Dividing both sides by 16, we get:
$$a^2 + b^2 + c^2 = \frac{1}{16}(a^2 + b^2 + c^2)$$
This equation is true for any right triangle, so we can conclude that:
$$a^2 + b^2 = c^2$$
This is the Pythagorean Theorem!
Now, it's your turn again. Can you think of any real-world applications of this theorem? Do you have any questions about the proof?
Remember, this is a proof, not a mathematical exercise. Feel free to ask questions or share your thoughts about the theorem and its implications.
</output>
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_leafspark__Llama-3.1-8B-MultiReflection-Instruct)
| Metric |Value|
|-------------------|----:|
|Avg. |26.12|
|IFEval (0-Shot) |71.25|
|BBH (3-Shot) |28.45|
|MATH Lvl 5 (4-Shot)|12.54|
|GPQA (0-shot) | 5.70|
|MuSR (0-shot) | 8.52|
|MMLU-PRO (5-shot) |30.27|
|
mradermacher/NarraThinker12B-GGUF | mradermacher | 2024-09-16T14:29:18Z | 85 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ClaudioItaly/NarraThinker12B",
"base_model:quantized:ClaudioItaly/NarraThinker12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-15T23:53:45Z | ---
base_model: ClaudioItaly/NarraThinker12B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ClaudioItaly/NarraThinker12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
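As a concrete example, one of the quants listed below can be fetched and loaded with `llama-cpp-python` roughly like this (a sketch, not an official recipe; pick whichever quant fits your hardware):
```python
# Sketch: download a single quant from this repo and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/NarraThinker12B-GGUF",
    filename="NarraThinker12B.Q4_K_M.gguf",  # one of the files listed below
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Write the opening line of a mystery novel.", max_tokens=64)
print(result["choices"][0]["text"])
```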
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-GGUF/resolve/main/NarraThinker12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NarraThinker12B-i1-GGUF | mradermacher | 2024-09-16T14:29:16Z | 13 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ClaudioItaly/NarraThinker12B",
"base_model:quantized:ClaudioItaly/NarraThinker12B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-16T12:36:42Z | ---
base_model: ClaudioItaly/NarraThinker12B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ClaudioItaly/NarraThinker12B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/NarraThinker12B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/NarraThinker12B-i1-GGUF/resolve/main/NarraThinker12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pasqualedem/label_anything_256_sam_1024_coco | pasqualedem | 2024-09-16T14:22:46Z | 6 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2024-09-16T14:15:03Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
chano12/llama3_without_tag | chano12 | 2024-09-16T14:16:01Z | 13 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2024-09-16T14:15:47Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
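In the absence of card-specific instructions, a minimal loading sketch is shown below, assuming this repository holds a PEFT (LoRA) adapter for the base model listed above:
```python
# Minimal sketch, assuming this repo contains a PEFT adapter for the listed base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "chano12/llama3_without_tag")  # attach the adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```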
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
PrunaAI/GalrionSoftworks-Pleiades-12B-v1-4bit-smashed | PrunaAI | 2024-09-16T14:10:52Z | 5 | 0 | null | [
"safetensors",
"mistral",
"pruna-ai",
"base_model:GalrionSoftworks/Pleiades-12B-v1",
"base_model:quantized:GalrionSoftworks/Pleiades-12B-v1",
"4-bit",
"awq",
"region:us"
] | null | 2024-09-13T15:58:58Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: GalrionSoftworks/Pleiades-12B-v1
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with AWQ (4-bit weight quantization).
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo GalrionSoftworks/Pleiades-12B-v1 are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
import torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model = AutoAWQForCausalLM.from_quantized(
"PrunaAI/GalrionSoftworks-Pleiades-12B-v1-4bit-smashed",
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map="auto",
fuse_layers=False,
)
tokenizer = AutoTokenizer.from_pretrained("GalrionSoftworks/Pleiades-12B-v1")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = [
{"role": "user", "content": "What is the color of prunes?"},
]
inputs = tokenizer.apply_chat_template(
prompt,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
).to("cuda")
outputs = model.generate(
**inputs,
do_sample=True,
max_new_tokens=256,
streamer=streamer,
eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
)
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, GalrionSoftworks/Pleiades-12B-v1, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
gair-prox/ProX-RedPJ-1.7B-25B | gair-prox | 2024-09-16T14:10:17Z | 5 | 0 | null | [
"pytorch",
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2024-09-16T12:16:55Z | ---
license: apache-2.0
---
|
srikarvar/fine_tuned_model_17 | srikarvar | 2024-09-16T14:07:08Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2752",
"loss:OnlineContrastiveLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-09-16T14:06:46Z | ---
base_model: intfloat/multilingual-e5-small
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2752
- loss:OnlineContrastiveLoss
widget:
- source_sentence: Would you want to be President?
sentences:
- Can you help me with my homework?
- How to bake cookies?
- Why do you want to be to president?
- source_sentence: Velocity of sound waves in the atmosphere
sentences:
- What is the speed of sound in air?
- What is the best/most memorable thing you've ever eaten and why?
- The `safe` option in the `to_spreadsheet` method controls whether a safe conversion
or not is needed for certain plant attributes to store the data in a SpreadsheetTable
or Row.
- source_sentence: Number of countries in the European Union
sentences:
- How many countries are in the European Union?
- Who painted the Sistine Chapel ceiling?
- The RecipeManager class is used to manage the downloading and extraction of recipes.
- source_sentence: Official currency of the USA
sentences:
- What is purpose of life?
- Files inside ZIP archives are accessed and yielded sequentially using iter_zip().
- What is the currency of the United States?
- source_sentence: Who wrote the book "1984"?
sentences:
- What is the speed of light?
- How to set up a home gym?
- Who wrote the book "To Kill a Mockingbird"?
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class dev
type: pair-class-dev
metrics:
- type: cosine_accuracy
value: 0.9456521739130435
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.8053532838821411
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9554896142433236
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8053532838821411
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.92
name: Cosine Precision
- type: cosine_recall
value: 0.9938271604938271
name: Cosine Recall
- type: cosine_ap
value: 0.970102365862799
name: Cosine Ap
- type: dot_accuracy
value: 0.9456521739130435
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.8053532838821411
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.9554896142433236
name: Dot F1
- type: dot_f1_threshold
value: 0.8053532838821411
name: Dot F1 Threshold
- type: dot_precision
value: 0.92
name: Dot Precision
- type: dot_recall
value: 0.9938271604938271
name: Dot Recall
- type: dot_ap
value: 0.970102365862799
name: Dot Ap
- type: manhattan_accuracy
value: 0.9456521739130435
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 9.787351608276367
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.9554896142433236
name: Manhattan F1
- type: manhattan_f1_threshold
value: 9.787351608276367
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.92
name: Manhattan Precision
- type: manhattan_recall
value: 0.9938271604938271
name: Manhattan Recall
- type: manhattan_ap
value: 0.9698493258522533
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.9456521739130435
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.6239285469055176
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.9554896142433236
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.6239285469055176
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.92
name: Euclidean Precision
- type: euclidean_recall
value: 0.9938271604938271
name: Euclidean Recall
- type: euclidean_ap
value: 0.970102365862799
name: Euclidean Ap
- type: max_accuracy
value: 0.9456521739130435
name: Max Accuracy
- type: max_accuracy_threshold
value: 9.787351608276367
name: Max Accuracy Threshold
- type: max_f1
value: 0.9554896142433236
name: Max F1
- type: max_f1_threshold
value: 9.787351608276367
name: Max F1 Threshold
- type: max_precision
value: 0.92
name: Max Precision
- type: max_recall
value: 0.9938271604938271
name: Max Recall
- type: max_ap
value: 0.970102365862799
name: Max Ap
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class test
type: pair-class-test
metrics:
- type: cosine_accuracy
value: 0.9456521739130435
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.8053532838821411
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9554896142433236
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8053532838821411
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.92
name: Cosine Precision
- type: cosine_recall
value: 0.9938271604938271
name: Cosine Recall
- type: cosine_ap
value: 0.970102365862799
name: Cosine Ap
- type: dot_accuracy
value: 0.9456521739130435
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.8053532838821411
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.9554896142433236
name: Dot F1
- type: dot_f1_threshold
value: 0.8053532838821411
name: Dot F1 Threshold
- type: dot_precision
value: 0.92
name: Dot Precision
- type: dot_recall
value: 0.9938271604938271
name: Dot Recall
- type: dot_ap
value: 0.970102365862799
name: Dot Ap
- type: manhattan_accuracy
value: 0.9456521739130435
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 9.787351608276367
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.9554896142433236
name: Manhattan F1
- type: manhattan_f1_threshold
value: 9.787351608276367
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.92
name: Manhattan Precision
- type: manhattan_recall
value: 0.9938271604938271
name: Manhattan Recall
- type: manhattan_ap
value: 0.9698493258522533
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.9456521739130435
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.6239285469055176
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.9554896142433236
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.6239285469055176
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.92
name: Euclidean Precision
- type: euclidean_recall
value: 0.9938271604938271
name: Euclidean Recall
- type: euclidean_ap
value: 0.970102365862799
name: Euclidean Ap
- type: max_accuracy
value: 0.9456521739130435
name: Max Accuracy
- type: max_accuracy_threshold
value: 9.787351608276367
name: Max Accuracy Threshold
- type: max_f1
value: 0.9554896142433236
name: Max F1
- type: max_f1_threshold
value: 9.787351608276367
name: Max F1 Threshold
- type: max_precision
value: 0.92
name: Max Precision
- type: max_recall
value: 0.9938271604938271
name: Max Recall
- type: max_ap
value: 0.970102365862799
name: Max Ap
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_17")
# Run inference
sentences = [
'Who wrote the book "1984"?',
'Who wrote the book "To Kill a Mockingbird"?',
'What is the speed of light?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `pair-class-dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.9457 |
| cosine_accuracy_threshold | 0.8054 |
| cosine_f1 | 0.9555 |
| cosine_f1_threshold | 0.8054 |
| cosine_precision | 0.92 |
| cosine_recall | 0.9938 |
| cosine_ap | 0.9701 |
| dot_accuracy | 0.9457 |
| dot_accuracy_threshold | 0.8054 |
| dot_f1 | 0.9555 |
| dot_f1_threshold | 0.8054 |
| dot_precision | 0.92 |
| dot_recall | 0.9938 |
| dot_ap | 0.9701 |
| manhattan_accuracy | 0.9457 |
| manhattan_accuracy_threshold | 9.7874 |
| manhattan_f1 | 0.9555 |
| manhattan_f1_threshold | 9.7874 |
| manhattan_precision | 0.92 |
| manhattan_recall | 0.9938 |
| manhattan_ap | 0.9698 |
| euclidean_accuracy | 0.9457 |
| euclidean_accuracy_threshold | 0.6239 |
| euclidean_f1 | 0.9555 |
| euclidean_f1_threshold | 0.6239 |
| euclidean_precision | 0.92 |
| euclidean_recall | 0.9938 |
| euclidean_ap | 0.9701 |
| max_accuracy | 0.9457 |
| max_accuracy_threshold | 9.7874 |
| max_f1 | 0.9555 |
| max_f1_threshold | 9.7874 |
| max_precision | 0.92 |
| max_recall | 0.9938 |
| **max_ap** | **0.9701** |
#### Binary Classification
* Dataset: `pair-class-test`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.9457 |
| cosine_accuracy_threshold | 0.8054 |
| cosine_f1 | 0.9555 |
| cosine_f1_threshold | 0.8054 |
| cosine_precision | 0.92 |
| cosine_recall | 0.9938 |
| cosine_ap | 0.9701 |
| dot_accuracy | 0.9457 |
| dot_accuracy_threshold | 0.8054 |
| dot_f1 | 0.9555 |
| dot_f1_threshold | 0.8054 |
| dot_precision | 0.92 |
| dot_recall | 0.9938 |
| dot_ap | 0.9701 |
| manhattan_accuracy | 0.9457 |
| manhattan_accuracy_threshold | 9.7874 |
| manhattan_f1 | 0.9555 |
| manhattan_f1_threshold | 9.7874 |
| manhattan_precision | 0.92 |
| manhattan_recall | 0.9938 |
| manhattan_ap | 0.9698 |
| euclidean_accuracy | 0.9457 |
| euclidean_accuracy_threshold | 0.6239 |
| euclidean_f1 | 0.9555 |
| euclidean_f1_threshold | 0.6239 |
| euclidean_precision | 0.92 |
| euclidean_recall | 0.9938 |
| euclidean_ap | 0.9701 |
| max_accuracy | 0.9457 |
| max_accuracy_threshold | 9.7874 |
| max_f1 | 0.9555 |
| max_f1_threshold | 9.7874 |
| max_precision | 0.92 |
| max_recall | 0.9938 |
| **max_ap** | **0.9701** |
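The same metrics can be reproduced on your own labeled sentence pairs with the evaluator linked above; a minimal sketch (the example pairs and labels here are illustrative, not taken from the actual evaluation set):
```python
# Sketch: run BinaryClassificationEvaluator on a few illustrative sentence pairs.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("srikarvar/fine_tuned_model_17")
evaluator = BinaryClassificationEvaluator(
    sentences1=["What is the capital of Italy?", "How does photosynthesis work?"],
    sentences2=["Name the capital city of Italy", "What are the ingredients of a pizza?"],
    labels=[1, 0],  # 1 = paraphrase/duplicate, 0 = different
    name="pair-class-demo",
)
print(evaluator(model))  # accuracy / F1 / AP per similarity function
```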
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 2,752 training samples
* Columns: <code>sentence2</code>, <code>label</code>, and <code>sentence1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence2 | label | sentence1 |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | int | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.14 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>0: ~49.00%</li><li>1: ~51.00%</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.77 tokens</li><li>max: 22 tokens</li></ul> |
* Samples:
| sentence2 | label | sentence1 |
|:---------------------------------------------------|:---------------|:--------------------------------------------------|
| <code>What are the ingredients of pizza?</code> | <code>1</code> | <code>What are the ingredients of a pizza?</code> |
| <code>What are the ingredients of a burger?</code> | <code>0</code> | <code>What are the ingredients of a pizza?</code> |
| <code>How is photosynthesis carried out?</code> | <code>1</code> | <code>How does photosynthesis work?</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Evaluation Dataset
#### Unnamed Dataset
* Size: 276 evaluation samples
* Columns: <code>sentence2</code>, <code>label</code>, and <code>sentence1</code>
* Approximate statistics based on the first 276 samples:
| | sentence2 | label | sentence1 |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | int | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.34 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>0: ~41.30%</li><li>1: ~58.70%</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.56 tokens</li><li>max: 87 tokens</li></ul> |
* Samples:
| sentence2 | label | sentence1 |
|:---------------------------------------------------------------------------------------------------------------------------|:---------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>How is AI used to enhance cybersecurity?</code> | <code>0</code> | <code>What are the challenges of AI in cybersecurity?</code> |
| <code>The SYSTEM log documentation can be accessed by clicking on the link which will take you to the main version.</code> | <code>1</code> | <code>You can find the SYSTEM log documentation on the main version. Click on the provided link to redirect to the main version of the documentation.</code> |
| <code>Name the capital city of Italy</code> | <code>1</code> | <code>What is the capital of Italy?</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
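Putting the training dataset, loss, and the non-default hyperparameters above together, the fine-tuning setup can be approximated as follows (a sketch with toy data; the real run used the 2,752-sample dataset described earlier):
```python
# Sketch: approximate training setup with OnlineContrastiveLoss (toy data shown).
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import OnlineContrastiveLoss

model = SentenceTransformer("intfloat/multilingual-e5-small")
train_dataset = Dataset.from_dict({
    "sentence2": ["What are the ingredients of pizza?", "What are the ingredients of a burger?"],
    "label": [1, 0],
    "sentence1": ["What are the ingredients of a pizza?", "What are the ingredients of a pizza?"],
})
args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    num_train_epochs=4,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=2,
    warmup_ratio=0.1,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=OnlineContrastiveLoss(model),
)
trainer.train()
```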
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | pair-class-dev_max_ap | pair-class-test_max_ap |
|:----------:|:-------:|:-------------:|:----------:|:---------------------:|:----------------------:|
| 0 | 0 | - | - | 0.7876 | - |
| 0.2326 | 10 | 1.5405 | - | - | - |
| 0.4651 | 20 | 1.0389 | - | - | - |
| 0.6977 | 30 | 1.2755 | - | - | - |
| 0.9302 | 40 | 0.7024 | - | - | - |
| 1.0 | 43 | - | 0.9673 | 0.9133 | - |
| 1.1512 | 50 | 0.7527 | - | - | - |
| 1.3837 | 60 | 0.6684 | - | - | - |
| 1.6163 | 70 | 0.7612 | - | - | - |
| 1.8488 | 80 | 0.7265 | - | - | - |
| 2.0116 | 87 | - | 0.4647 | 0.9534 | - |
| 2.0698 | 90 | 0.2986 | - | - | - |
| 2.3023 | 100 | 0.1964 | - | - | - |
| 2.5349 | 110 | 0.5834 | - | - | - |
| 2.7674 | 120 | 0.4893 | - | - | - |
| 3.0 | 130 | 0.1254 | 0.3544 | 0.9670 | - |
| 3.2209 | 140 | 0.278 | - | - | - |
| 3.4535 | 150 | 0.1805 | - | - | - |
| 3.6860 | 160 | 0.4525 | - | - | - |
| 3.9186 | 170 | 0.1885 | - | - | - |
| **3.9651** | **172** | **-** | **0.3396** | **0.9701** | **0.9701** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.0
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.34.2
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Mohsen21/SaadiTTS | Mohsen21 | 2024-09-16T14:05:51Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-09-16T14:04:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gemma-PE/gemma2-9b-finetuned-PromptEngineering_merged | Gemma-PE | 2024-09-16T14:02:58Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T13:58:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Locutusque/TinyMistral-248M-v2.5 | Locutusque | 2024-09-16T13:56:09Z | 269 | 27 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"computer science",
"en",
"code",
"dataset:open-phi/programming_books_llama",
"dataset:open-phi/textbooks",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-24T01:08:46Z | ---
language:
- en
- code
license: apache-2.0
tags:
- merge
- computer science
datasets:
- open-phi/programming_books_llama
- open-phi/textbooks
inference:
parameters:
do_sample: true
temperature: 0.2
top_p: 0.14
top_k: 12
max_new_tokens: 250
repetition_penalty: 1.15
widget:
- text: 'To calculate the factorial of n, we can use the following function:'
model-index:
- name: TinyMistral-248M-v2.5
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 24.57
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 27.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.15
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 46.72
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 13.36
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 3.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 0.11
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.07
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.5
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/TinyMistral-248M-v2.5
name: Open LLM Leaderboard
---
# TinyMistral-248M-v2.5
This model was created by merging TinyMistral-248M-v1 and v2, then further pretraining the merge on synthetic textbooks. In my own evaluation, the resulting model outperforms both.
During training, this model reached an average perplexity score of 4, nearly 7x better than V1 and 4x better than V2.
You can use the following config to reproduce the merged model:
```
base_model: Locutusque/TinyMistral-248M-v2
dtype: float16
merge_method: ties
parameters:
int8_mask: 1.0
normalize: 1.0
slices:
- sources:
- layer_range: [0, 12]
model: Locutusque/TinyMistral-248M
parameters:
density: [1.0, 0.7, 0.1]
weight: 1.0
- layer_range: [0, 12]
model: Locutusque/TinyMistral-248M-v2
parameters:
density: 0.5
weight: [0.0, 0.3, 0.7, 1.0]
```
This model can also answer basic questions without any fine-tuning.
This model was also created to address an issue with V2: its weights were prone to exploding gradients, which made it difficult to fine-tune. This model is easier to fine-tune.
To get the best out of this model, I recommend installing it and trying it out yourself, as its performance seems to degrade in the hosted inference API.
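For a quick local test, the sampling settings from the widget configuration in this card's metadata can be reused; the following is a minimal sketch of that setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "To calculate the factorial of n, we can use the following function:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings copied from the widget configuration in this card.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.2,
    top_p=0.14,
    top_k=12,
    repetition_penalty=1.15,
    max_new_tokens=250,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```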
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__TinyMistral-248M-v2.5)
| Metric |Value|
|---------------------------------|----:|
|Avg. |28.29|
|AI2 Reasoning Challenge (25-Shot)|24.57|
|HellaSwag (10-Shot) |27.49|
|MMLU (5-Shot) |23.15|
|TruthfulQA (0-shot) |46.72|
|Winogrande (5-shot) |47.83|
|GSM8k (5-shot) | 0.00|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__TinyMistral-248M-v2.5)
| Metric |Value|
|-------------------|----:|
|Avg. | 3.87|
|IFEval (0-Shot) |13.36|
|BBH (3-Shot) | 3.18|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot) | 0.11|
|MuSR (0-shot) | 5.07|
|MMLU-PRO (5-shot) | 1.50|
|
Digitoun/model-moore | Digitoun | 2024-09-16T13:37:10Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-09-16T13:36:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/ReufGPT-v2-GGUF | mradermacher | 2024-09-16T13:30:06Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"sft",
"en",
"base_model:Zuccx/ReufGPT-v2",
"base_model:quantized:Zuccx/ReufGPT-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-16T11:38:37Z | ---
base_model: Zuccx/ReufGPT-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Zuccx/ReufGPT-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
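As a minimal sketch of local usage (assuming the `llama-cpp-python` bindings are installed; the Q4_K_M file is just one of the quants listed below):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumes `pip install llama-cpp-python`

# Fetch a single quant file from this repo (see the table below for all sizes).
model_path = hf_hub_download(
    repo_id="mradermacher/ReufGPT-v2-GGUF",
    filename="ReufGPT-v2.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("Write a short greeting.", max_tokens=64)
print(output["choices"][0]["text"])
```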
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.IQ3_XS.gguf) | IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.IQ3_S.gguf) | IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.IQ3_M.gguf) | IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ReufGPT-v2-GGUF/resolve/main/ReufGPT-v2.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DKTech/ICSR_classification_finetuned_mistral_V100_gguf | DKTech | 2024-09-16T13:27:39Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-16T13:12:16Z | ---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** DKTech
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
echarlaix/tiny-random-marian | echarlaix | 2024-09-16T13:26:30Z | 249,454 | 0 | null | [
"pytorch",
"marian",
"license:apache-2.0",
"region:us"
] | null | 2024-09-16T13:25:11Z | ---
license: apache-2.0
---
|
Flocksserver/whisper-large-v3-de-emodb-emotion-classification | Flocksserver | 2024-09-16T13:10:01Z | 40 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:emodb",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-09-16T13:08:16Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- emodb
metrics:
- accuracy
model-index:
- name: whisper-large-v3-de-emodb-emotion-classification
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: Emo-DB
type: emodb
metrics:
- name: Accuracy
type: accuracy
value: 0.9439252336448598
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-de-emodb-emotion-classification
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Emo-DB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3724
- Accuracy: 0.9439
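For inference, a minimal sketch using the `audio-classification` pipeline might look like this (the audio file path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Flocksserver/whisper-large-v3-de-emodb-emotion-classification",
)

# Placeholder path to a German speech recording.
predictions = classifier("speech_sample.wav")
print(predictions)
```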
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3351 | 1.0 | 214 | 1.1022 | 0.4953 |
| 0.2644 | 2.0 | 428 | 0.7572 | 0.7477 |
| 0.3796 | 3.0 | 642 | 1.0055 | 0.8131 |
| 0.0038 | 4.0 | 856 | 1.0754 | 0.8131 |
| 0.001 | 5.0 | 1070 | 0.5485 | 0.9159 |
| 0.001 | 6.0 | 1284 | 0.5881 | 0.8785 |
| 0.0007 | 7.0 | 1498 | 0.3376 | 0.9439 |
| 0.0006 | 8.0 | 1712 | 0.3592 | 0.9439 |
| 0.0006 | 9.0 | 1926 | 0.3695 | 0.9439 |
| 0.0004 | 10.0 | 2140 | 0.3724 | 0.9439 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
byunal/roberta-base-turkish-uncased-stance | byunal | 2024-09-16T13:08:40Z | 5 | 0 | null | [
"safetensors",
"roberta",
"text-classification",
"tr",
"base_model:burakaytan/roberta-base-turkish-uncased",
"base_model:finetune:burakaytan/roberta-base-turkish-uncased",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-09-16T13:04:40Z | ---
license: apache-2.0
language:
- tr
metrics:
- accuracy
- f1
base_model:
- burakaytan/roberta-base-turkish-uncased
pipeline_tag: text-classification
---
# byunal/roberta-base-turkish-uncased-stance

This repository contains a fine-tuned RoBERTa-based model for stance detection in Turkish. The base model for this fine-tuning is [burakaytan/roberta-base-turkish-uncased](https://huggingface.co/burakaytan/roberta-base-turkish-uncased). The model has been trained on a specially collected Turkish stance detection dataset.
## Model Description
- **Model Name**: byunal/roberta-base-turkish-uncased-stance
- **Base Model**: [burakaytan/roberta-base-turkish-uncased](https://huggingface.co/burakaytan/roberta-base-turkish-uncased)
- **Task**: Stance Detection
- **Language**: Turkish
The model predicts the stance of a given text towards a specific target. Possible stance labels include:
- **Favor**: The text supports the target
- **Against**: The text opposes the target
- **Neutral**: The text does not express a clear stance on the target
## Installation
To install the necessary libraries and load the model, run:
```bash
pip install transformers
```
## Usage
Here’s a simple example of how to use the model for stance detection in Turkish:
```bash
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
# Load the model and tokenizer
model_name = "byunal/roberta-base-turkish-uncased-stance"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Example text
text = "Bu konu hakkında kesinlikle karşıyım."
# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
# Perform prediction
with torch.no_grad():
outputs = model(**inputs)
# Get predicted stance
predictions = torch.argmax(outputs.logits, dim=-1)
stance_label = predictions.item()
# Display result
labels = ["Favor", "Against", "Neutral"]
print(f"The stance is: {labels[stance_label]}")
```
## Training
This model was fine-tuned on a specially collected Turkish stance detection dataset covering a range of text contexts and opinions. The dataset includes examples from social media, news articles, and public comments, supporting robust stance detection in real-world applications.
- Epochs: 10
- Batch Size: 32
- Learning Rate: 5e-5
- Optimizer: AdamW
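These hyperparameters map onto a standard 🤗 `Trainer` setup roughly as sketched below (an illustration only, not the author's original training script; dataset preparation is omitted):

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "burakaytan/roberta-base-turkish-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

training_args = TrainingArguments(
    output_dir="roberta-turkish-stance",   # placeholder
    num_train_epochs=10,
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    # AdamW is the Trainer default optimizer.
)

# The stance dataset itself is not published; plug in your own tokenized splits:
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
# )
# trainer.train()
```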
## Evaluation
The model was evaluated using Accuracy and Macro F1-score on a validation dataset. The results confirm the model's effectiveness in stance detection tasks in Turkish.
- Accuracy Score: 79.0%
- Macro F1 Score: 78.0% |
teslov/reranker | teslov | 2024-09-16T13:06:15Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"text-classification",
"reranker",
"cross-encoder",
"transformers.js",
"custom_code",
"multilingual",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] | text-classification | 2024-09-16T12:49:02Z | ---
pipeline_tag: text-classification
tags:
- transformers
- reranker
- cross-encoder
- transformers.js
language:
- multilingual
inference: false
license: cc-by-nc-4.0
library_name: transformers
---
<br><br>
<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>Trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
# jina-reranker-v2-base-multilingual
## Intended Usage & Model Info
The **Jina Reranker v2** (`jina-reranker-v2-base-multilingual`) is a transformer-based model that has been fine-tuned for the text reranking task, which is a crucial component in many information retrieval systems. It is a cross-encoder model that takes a query and a document pair as input and outputs a score indicating the relevance of the document to the query. The model is trained on a large dataset of query-document pairs and is capable of reranking documents in multiple languages with high accuracy.
Compared with state-of-the-art reranker models, including the previously released `jina-reranker-v1-base-en`, the **Jina Reranker v2** model has demonstrated competitiveness across a series of benchmarks targeting text retrieval, multilingual capability, function-calling-aware and text-to-SQL-aware reranking, and code retrieval tasks.
The `jina-reranker-v2-base-multilingual` model can handle long texts with a context length of up to `1024` tokens, enabling the processing of extensive inputs. For texts that exceed 1024 tokens, the model uses a sliding window approach to chunk the input text into smaller pieces and rerank each chunk separately.
The model is also equipped with a flash attention mechanism, which significantly improves the model's performance.
# Usage
_This model repository is licensed for research and evaluation purposes under CC-BY-NC-4.0. For commercial usage, please refer to Jina AI's APIs, AWS SageMaker or Azure Marketplace offerings. Please [contact us](https://jina.ai/contact-sales) for any further clarifications._
1. The easiest way to use `jina-reranker-v2-base-multilingual` is to call Jina AI's [Reranker API](https://jina.ai/reranker/).
```bash
curl https://api.jina.ai/v1/rerank \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "jina-reranker-v2-base-multilingual",
"query": "Organic skincare products for sensitive skin",
"documents": [
"Organic skincare for sensitive skin with aloe vera and chamomile.",
"New makeup trends focus on bold colors and innovative techniques",
"Bio-Hautpflege für empfindliche Haut mit Aloe Vera und Kamille",
"Neue Make-up-Trends setzen auf kräftige Farben und innovative Techniken",
"Cuidado de la piel orgánico para piel sensible con aloe vera y manzanilla",
"Las nuevas tendencias de maquillaje se centran en colores vivos y técnicas innovadoras",
"针对敏感肌专门设计的天然有机护肤产品",
"新的化妆趋势注重鲜艳的颜色和创新的技巧",
"敏感肌のために特別に設計された天然有機スキンケア製品",
"新しいメイクのトレンドは鮮やかな色と革新的な技術に焦点を当てています"
],
"top_n": 3
}'
```
2. You can also use the `transformers` library to interact with the model programmatically.
Before you start, install the `transformers` and `einops` libraries:
```bash
pip install transformers einops
```
And then:
```python
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(
'jinaai/jina-reranker-v2-base-multilingual',
torch_dtype="auto",
trust_remote_code=True,
)
model.to('cuda') # or 'cpu' if no GPU is available
model.eval()
# Example query and documents
query = "Organic skincare products for sensitive skin"
documents = [
"Organic skincare for sensitive skin with aloe vera and chamomile.",
"New makeup trends focus on bold colors and innovative techniques",
"Bio-Hautpflege für empfindliche Haut mit Aloe Vera und Kamille",
"Neue Make-up-Trends setzen auf kräftige Farben und innovative Techniken",
"Cuidado de la piel orgánico para piel sensible con aloe vera y manzanilla",
"Las nuevas tendencias de maquillaje se centran en colores vivos y técnicas innovadoras",
"针对敏感肌专门设计的天然有机护肤产品",
"新的化妆趋势注重鲜艳的颜色和创新的技巧",
"敏感肌のために特別に設計された天然有機スキンケア製品",
"新しいメイクのトレンドは鮮やかな色と革新的な技術に焦点を当てています",
]
# construct sentence pairs
sentence_pairs = [[query, doc] for doc in documents]
scores = model.compute_score(sentence_pairs, max_length=1024)
```
The scores will be a list of floats, where each float represents the relevance score of the corresponding document to the query. Higher scores indicate higher relevance.
For instance the returning scores in this case will be:
```bash
[0.8311430811882019, 0.09401018172502518,
0.6334102749824524, 0.08269733935594559,
0.7620701193809509, 0.09947021305561066,
0.9263036847114563, 0.05834583938121796,
0.8418256044387817, 0.11124119907617569]
```
The model gives high relevance scores to the documents that are most relevant to the query regardless of the language of the document.
Note that by default, the `jina-reranker-v2-base-multilingual` model uses [flash attention](https://github.com/Dao-AILab/flash-attention), which requires certain types of GPU hardware to run.
If you encounter any issues, you can try calling `AutoModelForSequenceClassification.from_pretrained()` with `use_flash_attn=False`.
This will use the standard attention mechanism instead of flash attention.
If you want to use flash attention for fast inference, you need to install the following packages:
```bash
pip install ninja # required for flash attention
pip install flash-attn --no-build-isolation
```
Enjoy the 3x-6x speedup with flash attention! ⚡️⚡️⚡️
3. You can also use the `transformers.js` library to run the model directly in JavaScript (in-browser, Node.js, Deno, etc.)!
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library (v3) using:
```bash
npm i xenova/transformers.js#v3
```
Then, you can use the following code to interact with the model:
```js
import { AutoTokenizer, XLMRobertaModel } from '@xenova/transformers';
const model_id = 'jinaai/jina-reranker-v2-base-multilingual';
const model = await XLMRobertaModel.from_pretrained(model_id, { dtype: 'fp32' });
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
/**
* Performs ranking with the CrossEncoder on the given query and documents. Returns a sorted list with the document indices and scores.
* @param {string} query A single query
* @param {string[]} documents A list of documents
* @param {Object} options Options for ranking
* @param {number} [options.top_k=undefined] Return the top-k documents. If undefined, all documents are returned.
* @param {number} [options.return_documents=false] If true, also returns the documents. If false, only returns the indices and scores.
*/
async function rank(query, documents, {
top_k = undefined,
return_documents = false,
} = {}) {
const inputs = tokenizer(
new Array(documents.length).fill(query),
{ text_pair: documents, padding: true, truncation: true }
)
const { logits } = await model(inputs);
return logits.sigmoid().tolist()
.map(([score], i) => ({
corpus_id: i,
score,
...(return_documents ? { text: documents[i] } : {})
})).sort((a, b) => b.score - a.score).slice(0, top_k);
}
// Example usage:
const query = "Organic skincare products for sensitive skin"
const documents = [
"Organic skincare for sensitive skin with aloe vera and chamomile.",
"New makeup trends focus on bold colors and innovative techniques",
"Bio-Hautpflege für empfindliche Haut mit Aloe Vera und Kamille",
"Neue Make-up-Trends setzen auf kräftige Farben und innovative Techniken",
"Cuidado de la piel orgánico para piel sensible con aloe vera y manzanilla",
"Las nuevas tendencias de maquillaje se centran en colores vivos y técnicas innovadoras",
"针对敏感肌专门设计的天然有机护肤产品",
"新的化妆趋势注重鲜艳的颜色和创新的技巧",
"敏感肌のために特別に設計された天然有機スキンケア製品",
"新しいメイクのトレンドは鮮やかな色と革新的な技術に焦点を当てています",
]
const results = await rank(query, documents, { return_documents: true, top_k: 3 });
console.log(results);
```
That's it! You can now use the `jina-reranker-v2-base-multilingual` model in your projects.
In addition to the `compute_score()` function, the `jina-reranker-v2-base-multilingual` model also provides a `model.rerank()` function that can be used to rerank documents based on a query. You can use it as follows:
```python
result = model.rerank(
query,
documents,
max_query_length=512,
max_length=1024,
top_n=3
)
```
Inside the `result` object, you will find the reranked documents along with their scores. You can use this information to further process the documents as needed.
The `rerank()` function will automatically chunk the input documents into smaller pieces if they exceed the model's maximum input length. This allows you to rerank long documents without running into memory issues.
Specifically, the `rerank()` function will split the documents into chunks of size `max_length` and rerank each chunk separately. The scores from all the chunks are then combined to produce the final reranking results. You can control the query length and document length in each chunk by setting the `max_query_length` and `max_length` parameters. The `rerank()` function also supports the `overlap` parameter (default is `80`) which determines how much overlap there is between adjacent chunks. This can be useful when reranking long documents to ensure that the model has enough context to make accurate predictions.
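For example, the chunking controls described above can be passed explicitly on the same call:

```python
result = model.rerank(
    query,
    documents,
    max_query_length=512,
    max_length=1024,
    overlap=80,   # token overlap between adjacent chunks (default 80)
    top_n=3,
)
```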
4. Alternatively, `jina-reranker-v2-base-multilingual` has been integrated with `CrossEncoder` from the `sentence-transformers` library.
Before you start, install the `sentence-transformers` libraries:
```bash
pip install sentence-transformers
```
The [`CrossEncoder`](https://sbert.net/docs/package_reference/cross_encoder/cross_encoder.html) class supports a [`predict`](https://sbert.net/docs/package_reference/cross_encoder/cross_encoder.html#sentence_transformers.cross_encoder.CrossEncoder.predict) method to get query-document relevance scores, and a [`rank`](https://sbert.net/docs/package_reference/cross_encoder/cross_encoder.html#sentence_transformers.cross_encoder.CrossEncoder.rank) method to rank all documents given your query.
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder(
"jinaai/jina-reranker-v2-base-multilingual",
automodel_args={"torch_dtype": "auto"},
trust_remote_code=True,
)
# Example query and documents
query = "Organic skincare products for sensitive skin"
documents = [
"Organic skincare for sensitive skin with aloe vera and chamomile.",
"New makeup trends focus on bold colors and innovative techniques",
"Bio-Hautpflege für empfindliche Haut mit Aloe Vera und Kamille",
"Neue Make-up-Trends setzen auf kräftige Farben und innovative Techniken",
"Cuidado de la piel orgánico para piel sensible con aloe vera y manzanilla",
"Las nuevas tendencias de maquillaje se centran en colores vivos y técnicas innovadoras",
"针对敏感肌专门设计的天然有机护肤产品",
"新的化妆趋势注重鲜艳的颜色和创新的技巧",
"敏感肌のために特別に設計された天然有機スキンケア製品",
"新しいメイクのトレンドは鮮やかな色と革新的な技術に焦点を当てています",
]
# construct sentence pairs
sentence_pairs = [[query, doc] for doc in documents]
scores = model.predict(sentence_pairs, convert_to_tensor=True).tolist()
"""
[0.828125, 0.0927734375, 0.6328125, 0.08251953125, 0.76171875, 0.099609375, 0.92578125, 0.058349609375, 0.84375, 0.111328125]
"""
rankings = model.rank(query, documents, return_documents=True, convert_to_tensor=True)
print(f"Query: {query}")
for ranking in rankings:
print(f"ID: {ranking['corpus_id']}, Score: {ranking['score']:.4f}, Text: {ranking['text']}")
"""
Query: Organic skincare products for sensitive skin
ID: 6, Score: 0.9258, Text: 针对敏感肌专门设计的天然有机护肤产品
ID: 8, Score: 0.8438, Text: 敏感肌のために特別に設計された天然有機スキンケア製品
ID: 0, Score: 0.8281, Text: Organic skincare for sensitive skin with aloe vera and chamomile.
ID: 4, Score: 0.7617, Text: Cuidado de la piel orgánico para piel sensible con aloe vera y manzanilla
ID: 2, Score: 0.6328, Text: Bio-Hautpflege für empfindliche Haut mit Aloe Vera und Kamille
ID: 9, Score: 0.1113, Text: 新しいメイクのトレンドは鮮やかな色と革新的な技術に焦点を当てています
ID: 5, Score: 0.0996, Text: Las nuevas tendencias de maquillaje se centran en colores vivos y técnicas innovadoras
ID: 1, Score: 0.0928, Text: New makeup trends focus on bold colors and innovative techniques
ID: 3, Score: 0.0825, Text: Neue Make-up-Trends setzen auf kräftige Farben und innovative Techniken
ID: 7, Score: 0.0583, Text: 新的化妆趋势注重鲜艳的颜色和创新的技巧
"""
```
# Evaluation
We evaluated Jina Reranker v2 on multiple benchmarks to ensure top-tier performance and search relevance.
| Model Name | Model Size | MKQA(nDCG@10, 26 langs) | BEIR(nDCG@10, 17 datasets) | MLDR(recall@10, 13 langs) | CodeSearchNet (MRR@10, 3 tasks) | AirBench (nDCG@10, zh/en) | ToolBench (recall@3, 3 tasks) | TableSearch (recall@3) |
| :-----------------------------: | :----------: | ------------------------- | ---------------------------- | --------------------------- | --------------------------------- | --------------------------- | ------------------------------- | ------------------------ |
| jina-reranker-v2-multilingual | 278M | 54.83 | 53.17 | 68.95 | 71.36 | 61.33 | 77.75 | 93.31 |
| bge-reranker-v2-m3 | 568M | 54.17 | 53.65 | 59.73 | 62.86 | 61.28 | 78.46 | 74.86 |
| mmarco-mMiniLMv2-L12-H384-v1 | 118M | 53.37 | 45.40 | 28.91 | 51.78 | 56.46 | 58.39 | 53.60 |
| jina-reranker-v1-base-en | 137M | - | 52.45 | - | - | - | 74.13 | 72.89 |
Note:
- NDCG@10 and MRR@10 measure ranking quality, with higher scores indicating better search results
- recall@3 measures the proportion of relevant documents retrieved, with higher scores indicating better search results |
LlamaFinetuneBase/Gemma-2-9B | LlamaFinetuneBase | 2024-09-16T13:05:07Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T09:36:22Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get started quickly with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-9b",
device="cuda", # replace with "mps" to run on a Mac device
)
text = "Once upon a time,"
outputs = pipe(text, max_new_tokens=256)
response = outputs[0]["generated_text"]
print(response)
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model "google/gemma-2-9b" --prompt "What is the capital of Mexico?"
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-9b", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
byunal/convbert-base-turkish-cased-stance | byunal | 2024-09-16T13:04:05Z | 5 | 0 | null | [
"safetensors",
"convbert",
"text-classification",
"tr",
"base_model:dbmdz/convbert-base-turkish-cased",
"base_model:finetune:dbmdz/convbert-base-turkish-cased",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-09-16T13:00:56Z | ---
license: apache-2.0
language:
- tr
metrics:
- accuracy
- f1
base_model:
- dbmdz/convbert-base-turkish-cased
pipeline_tag: text-classification
---
# byunal/convbert-base-turkish-cased-stance

This repository contains a fine-tuned BERT model for stance detection in Turkish. The base model for this fine-tuning is [dbmdz/convbert-base-turkish-cased](https://huggingface.co/dbmdz/convbert-base-turkish-cased). The model has been specifically trained on a uniquely collected Turkish stance detection dataset.
## Model Description
- **Model Name**: byunal/convbert-base-turkish-cased-stance
- **Base Model**: [dbmdz/convbert-base-turkish-cased](https://huggingface.co/dbmdz/convbert-base-turkish-cased)
- **Task**: Stance Detection
- **Language**: Turkish
The model predicts the stance of a given text towards a specific target. Possible stance labels include:
- **Favor**: The text supports the target
- **Against**: The text opposes the target
- **Neutral**: The text does not express a clear stance on the target
## Installation
To install the necessary libraries and load the model, run:
```bash
pip install transformers
```
## Usage
Here’s a simple example of how to use the model for stance detection in Turkish:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
# Load the model and tokenizer
model_name = "byunal/convbert-base-turkish-cased-stance"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Example text
text = "Bu konu hakkında kesinlikle karşıyım."
# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
# Perform prediction
with torch.no_grad():
outputs = model(**inputs)
# Get predicted stance
predictions = torch.argmax(outputs.logits, dim=-1)
stance_label = predictions.item()
# Display result
labels = ["Favor", "Against", "Neutral"]
print(f"The stance is: {labels[stance_label]}")
```
## Training
This model was fine-tuned using a specialized Turkish stance detection dataset that uniquely reflects various text contexts and opinions. The dataset includes diverse examples from social media, news articles, and public comments, ensuring a robust understanding of stance detection in real-world applications.
- Epochs: 10
- Batch Size: 32
- Learning Rate: 5e-5
- Optimizer: AdamW
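A minimal sketch of a comparable fine-tuning setup with the Hugging Face `Trainer` is given below; the tiny inline dataset and the label order are placeholders and assumptions, not part of the original training code.
```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "dbmdz/convbert-base-turkish-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Placeholder data -- the real corpus is the specially collected Turkish stance dataset.
raw = Dataset.from_dict({
    "text": ["Bu konuyu destekliyorum.", "Bu konuya kesinlikle karşıyım.", "Bu konuda fikrim yok."],
    "label": [0, 1, 2],  # assumed order: 0: Favor, 1: Against, 2: Neutral
})
encoded = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=64))

args = TrainingArguments(
    output_dir="convbert-turkish-stance",
    num_train_epochs=10,              # hyperparameters as reported above
    per_device_train_batch_size=32,
    learning_rate=5e-5,               # AdamW is the Trainer default optimizer
)

Trainer(model=model, args=args, train_dataset=encoded).train()
```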
## Evaluation
The model was evaluated using Accuracy and Macro F1-score on a validation dataset. The results confirm the model's effectiveness in stance detection tasks in Turkish.
- Accuracy Score: 83.0%
- Macro F1 Score: 82.0% |
darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp | darkc0de | 2024-09-16T13:00:49Z | 29 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"base_model:merge:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"base_model:bunnycore/HyperLlama-3.1-8B",
"base_model:merge:bunnycore/HyperLlama-3.1-8B",
"base_model:darkc0de/BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp",
"base_model:merge:darkc0de/BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp",
"base_model:mlabonne/Hermes-3-Llama-3.1-8B-lorablated",
"base_model:merge:mlabonne/Hermes-3-Llama-3.1-8B-lorablated",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-10T08:03:09Z | ---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- bunnycore/HyperLlama-3.1-8B
- mlabonne/NeuralDaredevil-8B-abliterated
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- darkc0de/BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp
model-index:
- name: BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 43.58
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 31.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 11.1
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.49
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.49
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.7
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) as a base.
### Models Merged
The following models were included in the merge:
* [bunnycore/HyperLlama-3.1-8B](https://huggingface.co/bunnycore/HyperLlama-3.1-8B)
* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
* [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated)
* [darkc0de/BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp](https://huggingface.co/darkc0de/BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- model: mlabonne/NeuralDaredevil-8B-abliterated
- model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- model: bunnycore/HyperLlama-3.1-8B
- model: darkc0de/BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp
merge_method: model_stock
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
dtype: bfloat16
```
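The merge can be reproduced with the `mergekit-yaml` command-line tool; a minimal sketch, assuming the configuration above is saved locally as `config.yaml` (the output directory is a placeholder):
```sh
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```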
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_darkc0de__BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp)
| Metric |Value|
|-------------------|----:|
|Avg. |22.04|
|IFEval (0-Shot) |43.58|
|BBH (3-Shot) |31.87|
|MATH Lvl 5 (4-Shot)|11.10|
|GPQA (0-shot) | 6.49|
|MuSR (0-shot) | 9.49|
|MMLU-PRO (5-shot) |29.70|
|
inflatebot/MN-12B-Mag-Mell-R1-GGUF | inflatebot | 2024-09-16T12:56:33Z | 5,113 | 12 | null | [
"gguf",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:quantized:inflatebot/MN-12B-Mag-Mell-R1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-16T02:16:56Z | ---
base_model:
- inflatebot/MN-12B-Mag-Mell-R1
base_model_relation: quantized
---
GGUF quants of [MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1).
Q4_K_M, Q6_K, Q8_0 and F16 are available. Let me know if you need more. |
toxsltech/ki-image-v5 | toxsltech | 2024-09-16T12:55:02Z | 6 | 0 | diffusers | [
"diffusers",
"art",
"en",
"dataset:Vivekmaan/ki-images-basketball-home-jersey-v1",
"arxiv:1910.09700",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | null | 2024-09-16T12:35:54Z | ---
license: other
datasets:
- Vivekmaan/ki-images-basketball-home-jersey-v1
language:
- en
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
tags:
- art
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
team-lucid/deberta-v3-base-korean | team-lucid | 2024-09-16T12:47:23Z | 1,371 | 10 | transformers | [
"transformers",
"pytorch",
"jax",
"rust",
"safetensors",
"deberta-v2",
"deberta-v3",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-30T02:06:36Z | ---
license: apache-2.0
language:
- ko
tags:
- deberta-v3
---
# deberta-v3-base-korean
## Model Details
DeBERTa improves on BERT through disentangled attention and an enhanced masked language model.
DeBERTa V3 further improves DeBERTa by combining ELECTRA-style pre-training with gradient-disentangled embedding sharing.
This model was trained on Cloud TPUs provided through Google's TPU Research Cloud (TRC).
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, DebertaV2ForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("team-lucid/deberta-v3-base-korean")
model = DebertaV2ForSequenceClassification.from_pretrained("team-lucid/deberta-v3-base-korean")
inputs = tokenizer("안녕, 세상!", return_tensors="pt")
outputs = model(**inputs)
```
## Evaluation
| | Backbone<br/>Parameters(M) | **NSMC**<br/>(acc) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) |
|:-------------------|:--------------------------:|:------------------:|:------------------:|:--------------------:|:-------------------------:|:---------------------------:|
| DistilKoBERT | 22M | 88.41 | 62.55 | 70.55 | 73.21 | 92.48 |
| KoBERT | 85M | 89.63 | 80.65 | 79.00 | 79.64 | 93.93 |
| XLM-Roberta-Base | 85M | 89.49 | 82.95 | 79.92 | 79.09 | 93.53 |
| KcBERT-Base | 85M | 89.62 | 66.95 | 74.85 | 75.57 | 93.93 |
| KcBERT-Large | 302M | 90.68 | 70.15 | 76.99 | 77.49 | 94.06 |
| KoELECTRA-Small-v3 | 9.4M | 89.36 | 77.45 | 78.60 | 80.79 | 94.85 |
| KoELECTRA-Base-v3 | 85M | 90.63 | 84.45 | 82.24 | **85.53** | 95.25 |
| Ours | | | | | | |
| DeBERTa-xsmall | 22M | 91.21 | 84.40 | 82.13 | 83.90 | 95.38 |
| DeBERTa-small | 43M | **91.34** | 83.90 | 81.61 | 82.97 | 94.98 |
| DeBERTa-base | 86M | 91.22 | **85.5** | **82.81** | 84.46 | **95.77** |
\* Results for the other models are taken from [KcBERT-Finetune](https://github.com/Beomi/KcBERT-Finetune)
and [KoELECTRA](https://github.com/monologg/KoELECTRA); hyperparameters were set similarly to those used for the other models.
## Model Memory Requirements
| dtype | Largest Layer or Residual Group | Total Size | Training using Adam |
|:-----------------|:----------------------------------|:-------------|:----------------------|
| float32 | 187.79 MB | 513.77 MB | 2.01 GB |
| float16/bfloat16 | 93.9 MB | 256.88 MB | 1.0 GB |
| int8 | 46.95 MB | 128.44 MB | 513.77 MB |
| int4 | 23.47 MB | 64.22 MB | 256.88 MB |
|
teres/ex-ai-assets | teres | 2024-09-16T12:32:27Z | 6 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-09-16T12:07:57Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EXA_AI
---
# Ex Ai Assets
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EXA_AI` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('teres/ex-ai-assets', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
hancheolp/ambiguity-aware-roberta-emotion | hancheolp | 2024-09-16T12:27:39Z | 5 | 0 | null | [
"pytorch",
"roberta",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-09-16T12:21:58Z | ---
license: mit
language:
- en
base_model:
- FacebookAI/roberta-base
---
# Ambiguity-aware RoBERTa
This model was trained on the SemEval-2007 Task 14 Affective Text dataset and can represent the ambiguity that arises in emotion analysis as an accurate distribution (i.e., softmax output). It was introduced in the following paper: ["Deep Model Compression Also Helps Models Capture Ambiguity"](https://aclanthology.org/2023.acl-long.381.pdf) (ACL 2023).
# Usage
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
tokenizer = RobertaTokenizer.from_pretrained('hancheolp/ambiguity-aware-roberta-emotion')
model = RobertaForSequenceClassification.from_pretrained('hancheolp/ambiguity-aware-roberta-emotion')
news_headline = "Amateur rocket scientists reach for space."
encoded_input = tokenizer(news_headline, return_tensors='pt')
output = model(**encoded_input)
distribution = output.logits.softmax(dim=-1)
```
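Continuing the example above, the `distribution` tensor can be mapped to named emotion scores; a small sketch, assuming the index order listed below:
```python
labels = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
scores = dict(zip(labels, distribution[0].tolist()))
print(scores)  # probabilities over the six emotions, summing to 1
```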
Each index of the output vector represents the following:
* 0: anger
* 1: disgust
* 2: fear
* 3: joy
* 4: sadness
* 5: surprise |
anti-ai/MedVi-chat-1.5B | anti-ai | 2024-09-16T12:25:49Z | 34 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"vi",
"base_model:anti-ai/MedVi-chat-1.5B",
"base_model:finetune:anti-ai/MedVi-chat-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-10T10:14:45Z | ---
base_model: anti-ai/MedVi-chat-1.5B
language:
- vi
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** anti-ai
- **License:** apache-2.0
- **Finetuned from model :** anti-ai/MedVi-chat-1.5B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
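A minimal usage sketch with plain `transformers` (the Vietnamese prompt is only an illustration, and the snippet assumes the tokenizer ships a chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anti-ai/MedVi-chat-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Cảm cúm thông thường nên điều trị như thế nào?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```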
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Xu-Ouyang/pythia-2.8b-deduped-int2-step129000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T12:03:31Z | 60 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T12:03:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
farazfo/ana-text-encoder2 | farazfo | 2024-09-16T12:01:37Z | 24 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-09-16T11:49:00Z | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of sks person
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - farazfo/ana-text-encoder2
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
# Hypothetical usage sketch -- not produced by the training script itself.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("farazfo/ana-text-encoder2", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks person").images[0]
image.save("sks_person.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
rg1683/fine_tuned_wordpiece_test_NamedEntityRecognition_large | rg1683 | 2024-09-16T11:59:49Z | 173 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-09-14T18:37:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
laidawang/test_flux_controlnet | laidawang | 2024-09-16T11:51:22Z | 6 | 0 | diffusers | [
"diffusers",
"safetensors",
"flux",
"flux-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:openrail++",
"region:us"
] | text-to-image | 2024-09-15T16:58:08Z | ---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: openrail++
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-laidawang/test_flux_controlnet
These are controlnet weights trained on black-forest-labs/FLUX.1-dev with a new type of conditioning.
You can find some example images below.
prompt: red circle with blue background

prompt: cyan circle with brown floral background

## Intended uses & limitations
#### How to use
```python
# Hypothetical usage sketch -- not produced by the training script itself.
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained("laidawang/test_flux_controlnet", torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

control_image = load_image("conditioning.png")  # placeholder conditioning image
image = pipe("red circle with blue background", control_image=control_image).images[0]
image.save("output.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
farazfo/ana-text-encoder | farazfo | 2024-09-16T11:43:29Z | 27 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-09-16T11:32:42Z | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of sks person
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - farazfo/ana-text-encoder
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# Hypothetical usage sketch -- not produced by the training script itself.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("farazfo/ana-text-encoder", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks person").images[0]
image.save("sks_person.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Xu-Ouyang/pythia-2.8b-deduped-int2-step115000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T11:41:13Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T11:40:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Niggendar/junkjuice_silkysauce | Niggendar | 2024-09-16T11:40:12Z | 123 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-09-16T11:29:12Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
samganguly/bert-resume-classification-model | samganguly | 2024-09-16T11:39:29Z | 47 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-09-16T11:27:00Z | ---
library_name: transformers
tags:
- generated_from_keras_callback
model-index:
- name: bert-resume-classification-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-resume-classification-model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.0
- Tokenizers 0.19.1
|
RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf | RichardErkhov | 2024-09-16T11:39:13Z | 5 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-16T07:49:52Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
text2cypher-demo-16bit - GGUF
- Model creator: https://huggingface.co/tomasonjo/
- Original model: https://huggingface.co/tomasonjo/text2cypher-demo-16bit/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [text2cypher-demo-16bit.Q2_K.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q2_K.gguf) | Q2_K | 2.96GB |
| [text2cypher-demo-16bit.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [text2cypher-demo-16bit.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [text2cypher-demo-16bit.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [text2cypher-demo-16bit.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [text2cypher-demo-16bit.Q3_K.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q3_K.gguf) | Q3_K | 3.74GB |
| [text2cypher-demo-16bit.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [text2cypher-demo-16bit.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [text2cypher-demo-16bit.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [text2cypher-demo-16bit.Q4_0.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q4_0.gguf) | Q4_0 | 4.34GB |
| [text2cypher-demo-16bit.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [text2cypher-demo-16bit.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [text2cypher-demo-16bit.Q4_K.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q4_K.gguf) | Q4_K | 4.58GB |
| [text2cypher-demo-16bit.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [text2cypher-demo-16bit.Q4_1.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q4_1.gguf) | Q4_1 | 4.78GB |
| [text2cypher-demo-16bit.Q5_0.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q5_0.gguf) | Q5_0 | 5.21GB |
| [text2cypher-demo-16bit.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [text2cypher-demo-16bit.Q5_K.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q5_K.gguf) | Q5_K | 5.34GB |
| [text2cypher-demo-16bit.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [text2cypher-demo-16bit.Q5_1.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q5_1.gguf) | Q5_1 | 5.65GB |
| [text2cypher-demo-16bit.Q6_K.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q6_K.gguf) | Q6_K | 6.14GB |
| [text2cypher-demo-16bit.Q8_0.gguf](https://huggingface.co/RichardErkhov/tomasonjo_-_text2cypher-demo-16bit-gguf/blob/main/text2cypher-demo-16bit.Q8_0.gguf) | Q8_0 | 7.95GB |
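As a rough usage sketch (not from the original card), one of these quants can be run locally with `llama-cpp-python`; the file name, context size and GPU offload below are illustrative choices:

```python
from llama_cpp import Llama

# Point model_path at a downloaded .gguf file from the table above
llm = Llama(
    model_path="text2cypher-demo-16bit.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

messages = [
    {"role": "system", "content": "Given an input question, convert it to a Cypher query. No pre-amble."},
    {"role": "user", "content": "Identify the top 5 questions with the most downVotes."},
]
out = llm.create_chat_completion(messages=messages, max_tokens=128)
print(out["choices"][0]["message"]["content"])
```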
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-Instruct
datasets:
- tomasonjo/text2cypher-gpt4o-clean
---
# Uploaded model
- **Developed by:** tomasonjo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
**For more information visit [this link](https://github.com/neo4j-labs/text2cypher/tree/main/finetuning/unsloth-llama3#using-chat-prompt-template)**
## Example usage:
Install dependencies. Check the [Unsloth documentation](https://github.com/unslothai/unsloth) for installation instructions specific to other environments.
````python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
````
Then you can load the model and run inference with it:
```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template

# Load the fine-tuned model and tokenizer (sequence length and 4-bit loading are illustrative settings)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "tomasonjo/text2cypher-demo-16bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3",
    map_eos_token = True,
)

FastLanguageModel.for_inference(model) # Enable native 2x faster inference
schema = """Node properties: - **Question** - `favorites`: INTEGER Example: "0" - `answered`: BOOLEAN - `text`: STRING Example: "### This is: Bug ### Specifications OS: Win10" - `link`: STRING Example: "https://stackoverflow.com/questions/62224586/playg" - `createdAt`: DATE_TIME Min: 2020-06-05T16:57:19Z, Max: 2020-06-05T21:49:16Z - `title`: STRING Example: "Playground is not loading with apollo-server-lambd" - `id`: INTEGER Min: 62220505, Max: 62224586 - `upVotes`: INTEGER Example: "0" - `score`: INTEGER Example: "-1" - `downVotes`: INTEGER Example: "1" - **Tag** - `name`: STRING Example: "aws-lambda" - **User** - `image`: STRING Example: "https://lh3.googleusercontent.com/-NcFYSuXU0nk/AAA" - `link`: STRING Example: "https://stackoverflow.com/users/10251021/alexandre" - `id`: INTEGER Min: 751, Max: 13681006 - `reputation`: INTEGER Min: 1, Max: 420137 - `display_name`: STRING Example: "Alexandre Le" Relationship properties: The relationships: (:Question)-[:TAGGED]->(:Tag) (:User)-[:ASKED]->(:Question)"""
question = "Identify the top 5 questions with the most downVotes."
messages = [
    {"role": "system", "content": "Given an input question, convert it to a Cypher query. No pre-amble."},
    {"role": "user", "content": f"""Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:
{schema}
Question: {question}
Cypher query:"""}
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")
outputs = model.generate(input_ids = inputs, max_new_tokens = 128, use_cache = True)
tokenizer.batch_decode(outputs)
```
|
weecology/deepforest-bird | weecology | 2024-09-16T11:33:16Z | 124,706 | 2 | deepforest | [
"deepforest",
"safetensors",
"license:mit",
"region:us"
] | null | 2024-07-26T18:52:01Z | ---
license: mit
tags:
- deepforest
---
# Bird Detection in RGB Airborne Imagery
The model was initially described in [Ecological Applications](https://esajournals.onlinelibrary.wiley.com/doi/abs/10.1002/eap.2694). From the abstract:

> Using over 250,000 annotations from 13 projects from around the world, we develop a general bird detection model that achieves over 65% recall and 50% precision on novel aerial data without any local training despite differences in species, habitat, and imaging methodology. Fine-tuning this model with only 1000 local annotations increases these values to an average of 84% recall and 69% precision by building on the general features learned from other data sources.
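A minimal usage sketch with the `deepforest` Python package (assuming its current API; the image path is a placeholder):

```python
from deepforest import main

# Create a DeepForest model and download the pre-trained bird release weights
model = main.deepforest()
model.use_bird_release()

# Predict bird bounding boxes in an RGB airborne image; returns a pandas DataFrame
boxes = model.predict_image(path="airborne_scene.png")
print(boxes.head())
```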
|
DeltaIV/llama31-eros-gguf | DeltaIV | 2024-09-16T11:27:53Z | 20 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"base_model:quantized:grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-16T11:20:05Z | ---
base_model: grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** DeltaIV
- **License:** apache-2.0
- **Finetuned from model :** grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
leukas/DeBaby-fullcontr | leukas | 2024-09-16T11:25:27Z | 2,556 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-09-11T15:58:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/arco-75k-GGUF | mradermacher | 2024-09-16T11:22:35Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:appvoid/arco-75k",
"base_model:quantized:appvoid/arco-75k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-16T11:18:32Z | ---
base_model: appvoid/arco-75k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/appvoid/arco-75k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.IQ3_XS.gguf) | IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.IQ3_S.gguf) | IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.IQ3_M.gguf) | IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q6_K.gguf) | Q6_K | 0.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Xu-Ouyang/pythia-2.8b-deduped-int2-step110000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T11:19:33Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T11:19:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kort/x2 | Kort | 2024-09-16T11:19:10Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T11:11:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YuanZ77/gemma2b-b | YuanZ77 | 2024-09-16T11:18:35Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T05:35:54Z | ---
library_name: transformers
license: gemma
base_model: google/gemma-2-2b-it
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: gemma2b-b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma2b-b
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.39 | 0.9180 | 7 | 1.3376 |
| 0.9541 | 1.9672 | 15 | 1.0215 |
| 0.745 | 2.7541 | 21 | 1.0122 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
kodetr/stunting-qa-v1 | kodetr | 2024-09-16T11:13:43Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T11:02:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
swp-berlin/deberta-base-news-topics-kenia-europe | swp-berlin | 2024-09-16T11:05:56Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"doi:10.57967/hf/3078",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-09-12T16:16:00Z | ---
library_name: transformers
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Paul Bochtler
- **Finetuned from model [optional]:** microsoft/deberta-v3-base
## Uses
The model has been trained on about 700 articles from Kenyan newspapers to detect the presence of the following topics:
- **Coronavirus**: Includes topics related to the outbreak and vaccines.
- **Cultural Cooperation**: Topics covering cultural exchanges and partnerships.
- **Development Cooperation**: Focuses on areas such as agriculture, transport, and renewable energies.
- **Diaspora Affairs/Remittances**: Topics involving the Kenyan diaspora and financial remittances.
- **European Domestic and Regional Politics**: Includes issues such as Brexit and market regulation/standards.
- **Financing/Loans/Debt**: Covers financial aspects including loans and debt management.
- **Global Affairs/International (Geo)politics**: Topics related to international relations and geopolitical dynamics.
- **Kenyan Foreign Policy/Diplomacy**: Focus on Kenya's foreign relations and diplomatic efforts.
- **Regional Affairs/African Politics**: Topics on regional dynamics and African political issues.
- **Social Controversies**: Includes discussions on the colonial past, visa/migration issues, energy justice, and the ICC case.
- **Tourism**: Covers aspects related to the tourism industry.
- **Trade/Investment**: Includes import/export, tenders, and investment projects.
### Direct Use
This model can be directly applied to classify articles based on the above topics, making it suitable for use in media analysis, content categorization, and research on public discourse in Kenyan media.
## Bias, Risks, and Limitations
The model `swp-berlin/deberta-base-news-topics-kenia-europe` was trained on approximately 700 articles from Kenyan newspapers, which may introduce certain biases and limitations:
- **Data Bias**: The model's predictions are influenced by the specific articles and sources used during training, which may reflect the perspectives, biases, and linguistic styles of those publications. This can result in an overrepresentation of certain viewpoints or underrepresentation of others, especially those outside the mainstream media.
- **Cultural and Regional Bias**: Since the training data is centered around Kenyan newspapers, the model may perform better on content related to East African contexts and may not generalize well to other regions or cultural settings.
- **Topic Limitations**: The model is designed to detect specific topics such as global affairs, development cooperation, and social controversies. It may not perform well on texts that fall outside these predefined categories.
- **Risks of Misclassification**: As with any classification model, there is a risk of misclassification, especially with nuanced or ambiguous content. Users should verify model outputs where high accuracy is critical.
- **Ethical Considerations**: Users should be mindful of how the model’s outputs are used, particularly in sensitive contexts such as media analysis or public discourse monitoring, to avoid reinforcing biases or misinformation.
## How to Get Started with the Model
To get started with the model, use the following code snippet:
```python
from transformers import pipeline, AutoModelForSequenceClassification, AutoTokenizer
import torch
# Define model path and device
model_name = "swp-berlin/deberta-base-news-topics-kenia-europe"
device = "cuda:0" if torch.cuda.is_available() else "cpu" # Use GPU if available, otherwise CPU
# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True, model_max_length=512)
# Initialize the pipeline for text classification
pipe_classifier = pipeline(
"text-classification",
model=model,
tokenizer=tokenizer,
framework="pt",
device=device,
batch_size=2
)
# Example usage
result = pipe_classifier("Example text to classify")
print(result)
```
### Training Data
The model was trained on a custom dataset comprising approximately 700 articles from Kenyan newspapers. The dataset includes a variety of topics relevant to Kenyan and international contexts, including health, politics, development, and cultural affairs. Preprocessing involved filtering irrelevant articles and balancing the dataset across the target topics.
### Training Procedure
The model was fine-tuned from a pre-trained `DeBERTa-base` model using the following training configuration:
#### Preprocessing
- Texts were tokenized using the `DeBERTa` tokenizer, with special attention given to splitting sentences and removing noise such as URLs and non-text elements.
#### Training Hyperparameters
- **Hyperparameters**:
- **Learning Rate**: `6e-5`
- **Batch Size**: `8`
- **Epochs**: `20`
- **Gradient Accumulation Steps**: `4`
- **Warm-up Ratio**: `0.06` to gradually ramp up the learning rate at the start of training
- **Weight Decay**: `0.01` to regularize the model and prevent overfitting
- **Evaluation Strategy**: Evaluation was performed at the end of each epoch, with the best model based on `f1_macro` score retained.
Training was conducted on a GPU environment to optimize performance and speed. The training script utilized Hugging Face's `Trainer` class for efficient model management and evaluation.
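For orientation only, these settings roughly correspond to the following 🤗 `TrainingArguments`; this is a sketch, not the authors' actual training script, and the output directory is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-news-topics",      # placeholder
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=20,
    warmup_ratio=0.06,
    weight_decay=0.01,
    eval_strategy="epoch",                 # evaluate at the end of each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1_macro",      # keep the checkpoint with the best macro F1
)
```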
|
Xu-Ouyang/pythia-2.8b-deduped-int2-step107000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T10:55:57Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T10:53:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CodingQueen13/whisper-tiny-en | CodingQueen13 | 2024-09-16T10:55:46Z | 61 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-09-16T09:58:23Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3282172373081464
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6626
- Wer Ortho: 0.3270
- Wer: 0.3282
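
For a quick test, a minimal inference sketch using the 🤗 Transformers ASR pipeline (the audio file path is a placeholder; any 16 kHz mono recording should work):

```python
from transformers import pipeline

# load the fine-tuned checkpoint from the Hub
asr = pipeline(
    "automatic-speech-recognition",
    model="CodingQueen13/whisper-tiny-en",
)

# transcribe a local audio file (placeholder path)
result = asr("example_recording.wav")
print(result["text"])
```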
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.0006 | 17.8571 | 500 | 0.6626 | 0.3270 | 0.3282 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF | mradermacher | 2024-09-16T10:52:53Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-16T09:34:08Z | ---
base_model: Dans-DiscountModels/Dans-Instruct-Mix-8b-ChatML-V0.0.2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Dans-DiscountModels/Dans-Instruct-Mix-8b-ChatML-V0.0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
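
As an illustration, a minimal Python sketch using the `llama-cpp-python` bindings, assuming you have already downloaded one of the quant files listed below (the Q4_K_M file is used as an example; context size and GPU offload are placeholder settings):

```python
from llama_cpp import Llama

# path to a locally downloaded quant from the table below (placeholder)
llm = Llama(
    model_path="Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```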
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Dans-Instruct-Mix-8b-ChatML-V0.0.2-GGUF/resolve/main/Dans-Instruct-Mix-8b-ChatML-V0.0.2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DeltaIV/llama31-eros | DeltaIV | 2024-09-16T10:52:23Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"base_model:finetune:grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T10:47:34Z | ---
base_model: grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** DeltaIV
- **License:** apache-2.0
- **Finetuned from model :** grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF | mradermacher | 2024-09-16T10:51:31Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NobleMathews/SmolLM-135M-GrammarSFT-DPO",
"base_model:quantized:NobleMathews/SmolLM-135M-GrammarSFT-DPO",
"endpoints_compatible",
"region:us"
] | null | 2024-09-16T10:49:46Z | ---
base_model: NobleMathews/SmolLM-135M-GrammarSFT-DPO
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NobleMathews/SmolLM-135M-GrammarSFT-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.IQ3_S.gguf) | IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.IQ3_XS.gguf) | IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.IQ3_M.gguf) | IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TopperThijs/merge_test | TopperThijs | 2024-09-16T10:44:09Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-09-04T08:54:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NAITIVEAIAGENCY/NAITIVRLORA | NAITIVEAIAGENCY | 2024-09-16T10:41:02Z | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-09-15T22:10:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NAITIVECHRIS
---
# Naitivrlora
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NAITIVECHRIS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Skalinator/NAITIVRLORA', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Hajarhammadi/gpt2_finetuned | Hajarhammadi | 2024-09-16T10:40:23Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-20T13:06:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
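
As a minimal sketch, assuming this checkpoint is a standard GPT-2 text-generation fine-tune loadable with 🤗 Transformers (the prompt is a placeholder):

```python
from transformers import pipeline

# load the fine-tuned GPT-2 checkpoint from the Hub
generator = pipeline("text-generation", model="Hajarhammadi/gpt2_finetuned")

# placeholder prompt
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```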
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergiopaniego/detr-resnet-50-dc5-fashionpedia-finetuned | sergiopaniego | 2024-09-16T10:32:06Z | 55 | 0 | null | [
"safetensors",
"detr",
"object-detection",
"dataset:detection-datasets/fashionpedia",
"base_model:facebook/detr-resnet-50-dc5",
"base_model:finetune:facebook/detr-resnet-50-dc5",
"region:us"
] | object-detection | 2024-09-02T11:24:20Z | ---
datasets:
- detection-datasets/fashionpedia
base_model:
- facebook/detr-resnet-50-dc5
pipeline_tag: object-detection
--- |
Ba2han/Pretraining_Turkish-test-5500s | Ba2han | 2024-09-16T10:23:11Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"tr",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T08:39:23Z | ---
license: mit
language:
- tr
library_name: transformers
---
Pretrained on 1B (mostly Turkish) tokens from HF and "high quality" scraped data using a single RTX 3090. Training will continue. The model can already produce sensible sentences in Turkish.
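
A minimal generation sketch with 🤗 Transformers, assuming the checkpoint loads as a standard causal LM; the sampling settings mirror the ones quoted under the sample image further down, and the prompt is an illustrative placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ba2han/Pretraining_Turkish-test-5500s"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

torch.manual_seed(1022)  # seed quoted below
inputs = tokenizer("Türkiye'nin en kalabalık şehri", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_k=24,
    temperature=0.12,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```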
___________________________
HF kaynaklı ve scrape edilen 1 Milyar (çoğunlukla Türkçe) token ile 1 RTX 3090 kullanılarak eğitilmiştir. Eğitim devam edecek. Model şimdiden düzgün Türkçe cümleler kurabiliyor:

(top_k=24, repetition_penalty=1.1, temperature=0.12, seed=1022) |
bartowski/reader-lm-1.5b-GGUF | bartowski | 2024-09-16T10:16:35Z | 1,335 | 14 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation",
"multilingual",
"base_model:jinaai/reader-lm-1.5b",
"base_model:quantized:jinaai/reader-lm-1.5b",
"license:cc-by-nc-4.0",
"region:us",
"conversational"
] | text-generation | 2024-09-11T21:47:15Z | ---
base_model: jinaai/reader-lm-1.5b
language:
- multilingual
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
quantized_by: bartowski
inference: false
---
## Llamacpp imatrix Quantizations of reader-lm-1.5b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3715">b3715</a> for quantization.
Original model: https://huggingface.co/jinaai/reader-lm-1.5b
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [reader-lm-1.5b-f16.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-f16.gguf) | f16 | 3.09GB | false | Full F16 weights. |
| [reader-lm-1.5b-Q8_0.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q8_0.gguf) | Q8_0 | 1.65GB | false | Extremely high quality, generally unneeded but max available quant. |
| [reader-lm-1.5b-Q6_K_L.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q6_K_L.gguf) | Q6_K_L | 1.33GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [reader-lm-1.5b-Q6_K.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q6_K.gguf) | Q6_K | 1.27GB | false | Very high quality, near perfect, *recommended*. |
| [reader-lm-1.5b-Q5_K_L.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q5_K_L.gguf) | Q5_K_L | 1.18GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [reader-lm-1.5b-Q5_K_M.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q5_K_M.gguf) | Q5_K_M | 1.13GB | false | High quality, *recommended*. |
| [reader-lm-1.5b-Q5_K_S.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q5_K_S.gguf) | Q5_K_S | 1.10GB | false | High quality, *recommended*. |
| [reader-lm-1.5b-Q4_K_L.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q4_K_L.gguf) | Q4_K_L | 1.04GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [reader-lm-1.5b-Q4_K_M.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q4_K_M.gguf) | Q4_K_M | 0.99GB | false | Good quality, default size for most use cases, *recommended*. |
| [reader-lm-1.5b-Q4_K_S.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q4_K_S.gguf) | Q4_K_S | 0.94GB | false | Slightly lower quality with more space savings, *recommended*. |
| [reader-lm-1.5b-Q4_0.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q4_0.gguf) | Q4_0 | 0.94GB | false | Legacy format, generally not worth using over similarly sized formats |
| [reader-lm-1.5b-Q3_K_XL.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q3_K_XL.gguf) | Q3_K_XL | 0.94GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [reader-lm-1.5b-Q4_0_8_8.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q4_0_8_8.gguf) | Q4_0_8_8 | 0.93GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
| [reader-lm-1.5b-Q4_0_4_8.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q4_0_4_8.gguf) | Q4_0_4_8 | 0.93GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
| [reader-lm-1.5b-Q4_0_4_4.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q4_0_4_4.gguf) | Q4_0_4_4 | 0.93GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
| [reader-lm-1.5b-IQ4_XS.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-IQ4_XS.gguf) | IQ4_XS | 0.90GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [reader-lm-1.5b-Q3_K_L.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-Q3_K_L.gguf) | Q3_K_L | 0.88GB | false | Lower quality but usable, good for low RAM availability. |
| [reader-lm-1.5b-IQ3_M.gguf](https://huggingface.co/bartowski/reader-lm-1.5b-GGUF/blob/main/reader-lm-1.5b-IQ3_M.gguf) | IQ3_M | 0.78GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/reader-lm-1.5b-GGUF --include "reader-lm-1.5b-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/reader-lm-1.5b-GGUF --include "reader-lm-1.5b-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (reader-lm-1.5b-Q8_0) or download them all in place (./)
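
If you prefer Python over the CLI, an equivalent sketch with `huggingface_hub` (the file name is taken from the table above):

```python
from huggingface_hub import hf_hub_download

# download a single quant file and return its local path
path = hf_hub_download(
    repo_id="bartowski/reader-lm-1.5b-GGUF",
    filename="reader-lm-1.5b-Q4_K_M.gguf",
    local_dir="./",
)
print(path)
```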
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
xinsir/controlnet-tile-sdxl-1.0 | xinsir | 2024-09-16T10:14:19Z | 25,250 | 213 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-06-26T15:27:48Z | ---
license: apache-2.0
pipeline_tag: text-to-image
---
# ***ControlNet Tile SDXL***

# Image Deblur Example(Repaint Detail)





# Image Variation Example(like midjourney)





# Image Super-resolution(like realESRGAN)
supports any aspect ratio and any upscale factor; the examples below are 3 × 3 times




# Code to Use Tile blur
code reference: https://huggingface.co/TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic/blob/main/TTP_tile_preprocessor_v5.py
https://github.com/lllyasviel/ControlNet-v1-1-nightly/blob/main/gradio_tile.py
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers import DDIMScheduler, EulerAncestralDiscreteScheduler
from PIL import Image
from guided_filter import FastGuidedFilter  # I have uploaded this file in this repo
import torch
import numpy as np
import cv2
import random
def resize_image_control(control_image, resolution):
HH, WW, _ = control_image.shape
crop_h = random.randint(0, HH - resolution[1])
crop_w = random.randint(0, WW - resolution[0])
crop_image = control_image[crop_h:crop_h+resolution[1], crop_w:crop_w+resolution[0], :]
return crop_image, crop_w, crop_h
def apply_gaussian_blur(image_np, ksize=5, sigmaX=1.0):
if ksize % 2 == 0:
ksize += 1 # ksize must be odd
blurred_image = cv2.GaussianBlur(image_np, (ksize, ksize), sigmaX=sigmaX)
return blurred_image
def apply_guided_filter(image_np, radius, eps, scale):
filter = FastGuidedFilter(image_np, radius, eps, scale)
return filter.filter(image_np)
controlnet_conditioning_scale = 1.0
prompt = "your prompt, the longer the better, you can describe it as detail as possible"
negative_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
controlnet = ControlNetModel.from_pretrained(
"xinsir/controlnet-tile-sdxl-1.0",
torch_dtype=torch.float16
)
# when test with other base model, you need to change the vae also.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
safety_checker=None,
torch_dtype=torch.float16,
scheduler=eulera_scheduler,
)
controlnet_img = cv2.imread("your original image path")
height, width, _ = controlnet_img.shape
ratio = np.sqrt(1024. * 1024. / (width * height))
W, H = int(width * ratio), int(height * ratio)
crop_w, crop_h = 0, 0
controlnet_img = cv2.resize(controlnet_img, (W, H))
blur_strength = random.sample([i / 10. for i in range(10, 201, 2)], k=1)[0]
radius = random.sample([i for i in range(1, 40, 2)], k=1)[0]
eps = random.sample([i / 1000. for i in range(1, 101, 2)], k=1)[0]
scale_factor = random.sample([i / 10. for i in range(10, 181, 5)], k=1)[0]
if random.random() > 0.5:
controlnet_img = apply_gaussian_blur(controlnet_img, ksize=int(blur_strength), sigmaX=blur_strength / 2)
if random.random() > 0.5:
# Apply Guided Filter
controlnet_img = apply_guided_filter(controlnet_img, radius, eps, scale_factor)
# Resize image
controlnet_img = cv2.resize(controlnet_img, (int(W / scale_factor), int(H / scale_factor)), interpolation=cv2.INTER_AREA)
controlnet_img = cv2.resize(controlnet_img, (W, H), interpolation=cv2.INTER_CUBIC)
controlnet_img = cv2.cvtColor(controlnet_img, cv2.COLOR_BGR2RGB)
controlnet_img = Image.fromarray(controlnet_img)
# need to resize the image resolution to 1024 * 1024 or the same bucket resolution to get the best performance
new_width, new_height = W, H
images = pipe(
prompt,
negative_prompt=negative_prompt,
image=controlnet_img,
controlnet_conditioning_scale=controlnet_conditioning_scale,
width=new_width,
height=new_height,
num_inference_steps=30,
).images
images[0].save(f"your image save path, png format is usually better than jpg or webp in terms of image quality but got much bigger")
```
# Code to Use Tile var
Using a more detailed prompt to regenerate can help!
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers import DDIMScheduler, EulerAncestralDiscreteScheduler
from PIL import Image
import torch
import numpy as np
import cv2
controlnet_conditioning_scale = 1.0
prompt = "your prompt, the longer the better, you can describe it as detail as possible"
negative_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
controlnet = ControlNetModel.from_pretrained(
"xinsir/controlnet-tile-sdxl-1.0",
torch_dtype=torch.float16
)
# when test with other base model, you need to change the vae also.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
safety_checker=None,
torch_dtype=torch.float16,
scheduler=eulera_scheduler,
)
controlnet_img = cv2.imread("your original image path")
height, width, _ = controlnet_img.shape
ratio = np.sqrt(1024. * 1024. / (width * height))
W, H = int(width * ratio), int(height * ratio)
crop_w, crop_h = 0, 0
controlnet_img = cv2.resize(controlnet_img, (W, H))
controlnet_img = cv2.cvtColor(controlnet_img, cv2.COLOR_BGR2RGB)
controlnet_img = Image.fromarray(controlnet_img)
# need to resize the image resolution to 1024 * 1024 or the same bucket resolution to get the best performance
new_width, new_height = W, H
images = pipe(
prompt,
negative_prompt=negative_prompt,
image=controlnet_img,
controlnet_conditioning_scale=controlnet_conditioning_scale,
width=new_width,
height=new_height,
num_inference_steps=30,
).images
images[0].save(f"your image save path, png format is usually better than jpg or webp in terms of image quality but got much bigger")
```
# Code to Use Tile super
performance may be unstable and the next version is being optimized!
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers import DDIMScheduler, EulerAncestralDiscreteScheduler
from PIL import Image
import torch
import numpy as np
import cv2
import random
controlnet_conditioning_scale = 1.0
prompt = "your prompt, the longer the better, you can describe it as detail as possible"
negative_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
controlnet = ControlNetModel.from_pretrained(
"xinsir/controlnet-tile-sdxl-1.0",
torch_dtype=torch.float16
)
# when test with other base model, you need to change the vae also.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
safety_checker=None,
torch_dtype=torch.float16,
scheduler=eulera_scheduler,
)
controlnet_img = cv2.imread("your original image path")
height, width, _ = controlnet_img.shape
ratio = np.sqrt(1024. * 1024. / (width * height))
W, H = int(width * ratio) // 48 * 48, int(height * ratio) // 48 * 48
controlnet_img = cv2.resize(controlnet_img, (W, H))
controlnet_img = cv2.cvtColor(controlnet_img, cv2.COLOR_BGR2RGB)
controlnet_img = Image.fromarray(controlnet_img)
# need to resize the image resolution to 1024 * 1024 or same bucket resolution to get the best performance
target_width = W // 3
target_height = H // 3

images = []
for i in range(3):  # 3 rows
    for j in range(3):  # 3 columns
left = j * target_width
top = i * target_height
right = left + target_width
bottom = top + target_height
        # crop the tile using the computed boundaries
cropped_image = controlnet_img.crop((left, top, right, bottom))
cropped_image = cropped_image.resize((W, H))
images.append(cropped_image)
seed = random.randint(0, 2147483647)
generator = torch.Generator('cuda').manual_seed(seed)
result_images = []
for sub_img in images:
new_width, new_height = W, H
out = pipe(prompt=[prompt]*1,
image=sub_img,
control_image=sub_img,
negative_prompt=[negative_prompt]*1,
generator=generator,
width=new_width,
height=new_height,
num_inference_steps=30,
crops_coords_top_left=(W, H),
target_size=(W, H),
original_size=(W * 2, H * 2),
)
result_images.append(out.images[0])
new_im = Image.new('RGB', (new_width*3, new_height*3))
# paste the 3x3 tiles onto the new image
new_im.paste(result_images[0], (0, 0))
new_im.paste(result_images[1], (new_width, 0))
new_im.paste(result_images[2], (new_width * 2, 0))
new_im.paste(result_images[3], (0, new_height))
new_im.paste(result_images[4], (new_width, new_height))
new_im.paste(result_images[5], (new_width * 2, new_height))
new_im.paste(result_images[6], (0, new_height * 2))
new_im.paste(result_images[7], (new_width, new_height * 2))
new_im.paste(result_images[8], (new_width * 2, new_height * 2))
new_im.save(f"your image save path, png format is usually better than jpg or webp in terms of image quality but got much bigger")
``` |
MICS-Lab/novae-brain-0 | MICS-Lab | 2024-09-16T10:13:48Z | 23 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2024-09-10T11:57:18Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
roryvernon/Megan-Flux | roryvernon | 2024-09-16T10:09:37Z | 9 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-09-16T09:46:03Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MeganWilson
---
# Megan Flux
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MeganWilson` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('roryvernon/Megan-Flux', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
EIRTHAIMED/Llama-3.1-EIRAI-8B | EIRTHAIMED | 2024-09-16T10:09:33Z | 173 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"text-generation-inference",
"llama-3.1",
"finetuning",
"conversational",
"th",
"en",
"arxiv:2409.08523",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-09T03:47:29Z | ---
language:
- th
- en
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- medical
- text-generation-inference
- llama-3.1
- finetuning
license: llama3.1
library_name: transformers
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66bf1cd096583c59b024a3c5/oG16EyLMfyiqvXrbNPGZd.png" alt="Logo_Website" width="400"/>
</p>
# **Thai Medical Large Language Model**
**Github** : [Github Evaluate](https://github.com/EIRAI-Thaimedical/EIRAI)<br>
**Paper** : [arXiv:2409.08523](https://arxiv.org/abs/2409.08523)<br>
## **Llama-3.1-EIRAI-8B-instruct**
**Llama-3.1-EIRAI-8B-instruct** is an **8-billion-parameter model** specifically tailored for **Thai medical applications**, with expertise in both **Thai medical language** and **English medical terminology**. The model has demonstrated its capabilities on key benchmarks such as **MMLU**, **MedQA**, **PubMedQA**, and **MedMCQA**, as well as Thai-language assessments like **ThaiExam**, **M3Exam**, **XNLI**, and **XCOPA**. Additionally, we have created a **Clinically Adapted Model Enhanced test** in the **Thai language** to support **clinical use in hospitals** and to further improve the performance of **Thai medical Retrieval-Augmented Generation (RAG)**.
## Notice
While **Eir AI Thai Medical LLM** is designed to encode high-quality medical knowledge, it is **not yet optimized for safe, practical use** in real-world medical settings. The model is still in the research phase and should **not be used for clinical decision-making** without further validation, including randomized controlled trials. It is available for researchers to explore the potential of LLMs in medical contexts, but **real-world deployment is not recommended** in its current version.
## Safety and Future Work
The current version of **Eir AI Thai Medical LLM** is under active development. We advise against using it for medical applications until further testing is completed. Our goal is to continue enhancing the model through **rigorous testing** and **real-world evaluation**, ensuring that it can be safely integrated into healthcare systems in the future.
## Model Overview
- **Model Architecture:** Meta-Llama-3.1-8B-Instruct
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
### Evaluations
| Medical Model | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | PubMedQA | MedMCQA | Avg. |
|--------------------------|---------------------|---------------------|--------------------|--------------------|--------------------|--------------------|-------------------|-------------------|-------------------|-------------------|
| **GPT-3.5 Turbo 1106** | 74.7 | 60.2 | 65.9 | 72.0 | 64.73 | 64.73 | 57.71 | 72.66 | 66.0 | 66.6 |
|Thai LLMs | | | | | | | | | | |
| **Eir AI-8B** | 75.1 | 80.0 | 69.6 | 76.8 | 77.1 | 66.5 | 64.5 | **79.0** | 58.6 | 71.9 |
| **Eir AI-8B + Prob** | **83.8** | **89.0** | **83.0** | **84.9** | **89.6** | **75.7** | **69.6** | 78.8 | **67.1** | **80.2** |
| **Typhoon-v1.5x-8B** | 75.9 | 79.0 | 63.7 | 70.6 | 77.1 | 63.6 | 59.7 | 74.4 | 58.0 | 69.1 |
| **OpenThaiGPT-beta-7B** | 37.4 | 38.0 | 4.5 | 32.7 | 36.1 | 32.4 | 32.4 | 62.0 | 31.8 | 34.1 |
## Translation Performance Metrics
| **Model** | **BLEU Score** | **N-gram Precisions (%)** | **BP** | **Ratio** |
|-------------------------------|----------------|---------------------------------|---------|-----------|
| Typhoon-v1.5x-8B-Instruct | 34.42 | 71.3/50.6/38.6/29.6 | 0.764 | 0.788 |
| Meta Llama 3.1-8B Instruct | 35.74 | 62.8/42.3/31.7/24.1 | 0.946 | 0.948 |
| **Eir AI-8B** | **61.10** | **76.1/64.6/56.6/50.1** | **1.000**| **1.006** |
| Eir AI-8B-prob | 47.91 | 74.0/58.0/48.2/40.6 | 0.890 | 0.896 |
## Clinically Adapted Thai Medical Task Performance
| Task | GPT-3.5 | Typhoon-v1.5x-8B-instruct | GPT-4o | Eir AI-8B |
|----------------------------------------|---------|----------------------------|--------|-----------|
| Named Entity Recognition | 3.26 | 5.55 | 6.34 | **7.08** |
| Temporal Information Extraction | 3.83 | 5.46 | 6.15 | **7.05** |
| Paraphrasing | 2.36 | 4.68 | 6.35 | **7.06** |
| Natural Language Generation | 2.63 | 4.87 | 6.91 | **7.66** |
| Keyword Extraction | 2.60 | 5.15 | 7.01 | **7.35** |
| Text Classification | 2.92 | 6.21 | 5.36 | **6.75** |
| Relation Extraction | 3.29 | 5.94 | 4.37 | **6.92** |
| Question Answering | 3.70 | 4.92 | 6.11 | **6.82** |
| Text Summarization | 2.98 | 5.44 | **7.51**| **7.51** |
| Abbreviation Expansion | 3.99 | 5.96 | 6.24 | **7.82** |
| Clinical Concept Normalization | 2.67 | 5.63 | 5.82 | **6.55** |
| Open-ended Question | 3.32 | 5.55 | 6.77 | **7.27** |
| Multiple-Choice Question | 3.90 | 5.00 | 5.40 | **6.40** |
| Coreference Resolution | 3.48 | 4.55 | 4.88 | **6.43** |
| Yes/No Question | 2.71 | 5.86 | 4.86 | **7.38** |
| Medical Translation | 3.00 | 4.00 | **7.79**| 7.65 |
| Medical Thai Extraction | 2.81 | 7.16 | **8.62**| 8.16 |
| Medical ICD Prediction | 2.08 | 3.16 | **8.12**| 6.41 |
| **Average Score** | 3.05 | 5.33 | 6.38 | **7.11** |
# Prompt Template
This model uses the `Llama 3.1` prompt template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
````
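
For example, a minimal sketch that applies this template through the tokenizer's built-in chat template in 🤗 Transformers (the system and user messages are illustrative placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EIRTHAIMED/Llama-3.1-EIRAI-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# illustrative placeholder messages
messages = [
    {"role": "system", "content": "You are a Thai medical assistant."},
    {"role": "user", "content": "ผู้ป่วยมีไข้ ไอ มีเสมหะ ควรตรวจอะไรเพิ่มเติม"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.2)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```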
# Example Clinical Adapted ICD 10 Prediction
````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are responsible for accurately assigning ICD-10 codes and to diagnose and document medical records.
Your expertise ensures that healthcare providers are properly reimbursed and that patient care is well-documented.
In this scenario, you will be presented with a series of medical records and your task is to provide the correct ICD-10 code(s)
and ICD-9 CM in procedures based on the information provided.
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
"Chief Complaint :5วันก่อนมารพ.มีไข้ ไอ มีเสมหะ มีน้ำมูก เหนื่อย ปวดเมื่อยตามตัว \r\n
Present illness : 5วันก่อนมารพ.มีไข้ ไอ มีเสมหะ มีน้ำมูก เหนื่อย ปวดเมื่อยตามตัว มีน้ำมูก เลือดกำเดาจาากข้างขวา
ปฏิการกระทบกระแทก ไม่มีเจ็บคอ ไม่มีอาการอ่อนเพลีย มีอาการอ่อนเพลีย ไอมาก ไอตลอด มีอาการระคายคอ ปัสสาวะปกติ ไม่มีถ่ายเหลว
\r\n\r\nAllergy : |\r\n\r\nOther : no underlying disease\r\n\r\nPlan Treatment Day 1 of hospitalization : admit ward
\r\n\r\nReview of System { \r\n\r\n General :a thai adult female ,look sickness fatigue dry lip moderate dehydration
\r\n Skin :no MP rash \r\n Eyes :not pale ,no icteric sclera \r\n Chest :secretion sound in both lung ,no crepitation , no wheezing \r
\n }
VitalSign First : {\n
BP : 117.0/63.0 mmHg\n
Pulse : 62.0 BPm\n
Temperature : 37.0 Celsius\n
Respiratory rate : 20.0\n
Weight : 50.000 kgs.\n
Height : 165.0 cm.\n
Painscore: N/A\n
O2SAT : 100\n}\n
Lab Results: \n
Electrolyte:Sodium (Na), Result : 143 mmol/L\r\n
Electrolyte:Potassium (K),Result : 3.8 mmol/L\r\n
Electrolyte:Chloride (Cl), Result : 108 mmol/L\r\n
Electrolyte:Bicarbonate (CO2),Result : 27.0 mmol/L\r\n
Creatinine (Serum):Creatinine, Result : 0.69 mg/dL\r\n
Creatinine (Serum):eGFR,Result : 100.41 ml/min/1.73 m^2\r\n
AST/SGOT:AST/SGOT, Result : 48 U/L\r\n
ALT/SGPT:ALT/SGPT, Result : 42 U/L\r\n
CBC:WBC Count,Result : 3.2 10^3/uL\r\n
CBC:RBC Count, Result : 3.57 10^6/uL\r\n
CBC:Hemoglobin (Hb), Result : 10.7 g/dL\r\n
CBC:Hematocrit (HCT),Result : 32.4 %\r\n
CBC:MCV, Result : 91 fL\r\n
CBC:MCH, Result : 30.0 pg\r\n
CBC:MCHC, Result : 33.0 g/dL\r\n
CBC:RDW-CV,Result : 12.9 %\r\n
CBC:Platelet Count, Result : 226 10^3/uL\r\n
CBC:Platelet Estimates, Result : Adequate \r\n
CBC:Neutrophils,Result : 41 %\r\n
CBC:Lymphocytes, Result : 46 %\r\n
CBC:Monocytes, Result : 9 %\r\n
CBC:Eosinophils, Result : 4 %\r\n
CBC:Basophils,Result : 0 %\r\n
COVID-19 Antigen (Rapid Test) For Admit:COVID-19 Antigen (Rapid Test), Result : Positive \r\n
Influenza A/B screening:Flu A,Result : Negative \r\n
Influenza A/B screening:Flu B, Result : Negative \n
X-ray Results: 15-05-2567 12:56 CHEST (PA, UPRIGHT)\r\n
\r\nHISTORY: Fever with productive cough and dyspnea for 5 days. \r\nCOMPARISON: None.\r\n
\r\n
FINDINGS:\r\n
TUBES/LINES: None.\r\nLUNGS: Normal lung volumes. Band-like opacity at right apical lung. Clear left lung.\r\n
PLEURA: No pneumothorax or effusion. Mild bilateral apical pleural thickening. \r\n
HEART AND MEDIASTINUM: Normal cardiac silhouette. Normal mediastinal contour.\r\n
BONY STRUCTURES: Mild scoliosis of the thoracic spine. \r\n\r\n
IMPRESSION:\r\n
- Band-like opacity at right apical lung, probably plate atelectasis or fibrosis\r\n- Mild bilateral apical pleural thickening"
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
Let's analyze the provided information and document the appropriate ICD-10 codes based on the patient's symptoms and diagnosis.
### Step-by-Step Analysis:
1. **Understand the Symptoms and Diagnosis**:
- The patient presents with a range of symptoms including fever, cough with sputum, fatigue, nasal discharge (including blood),
and dyspnea. The lab tests returned a positive COVID-19 antigen test, while the influenza test was negative.
2. **Identify Key Terms**:
- Primary key terms include 'COVID-19 positive', 'fever', 'productive cough', 'dyspnea', and pulmonary findings from the X-ray.
3. **Locate the Main Category**:
- The main categories for the conditions noted are:
- COVID-19: U07.1
- Possible acute bronchitis: J20 (based on the productive cough)
- Other symptoms such as dizziness: R42
- Evidence of acute respiratory infection: J06.9
4. **Identify Specific Codes**:
- Based on the diagnosis:
- COVID-19 confirmed case: U07.1
- Acute bronchitis: J20.9 (considering unspecified acute bronchitis)
- Dizziness (if noted as significant and problematic): R42
- Unspecified acute upper respiratory infection: J06.9
5. **Check for Additional Codes or Modifiers**:
- Considering the details provided, the primary diagnosis is the confirmed COVID-19 infection, while bronchiolitis
and dizziness might be secondary diagnoses.
6. **Validate the Code Selection**:
- Confirm that the selected codes reflect the patient's symptoms accurately, including acute bronchitis
and any other significant symptoms presented.
7. **Document the ICD-10 Code**:
- The ICD-10 codes are documented for all noted conditions accordingly."
```json
Answer: {
"Primary_Diagnosis": {
"Code": "U071",
"Description": "COVID-19, virus identified"
},
"Secondary_Diagnosis": [
{
"Code": "J20",
"Description": "Acute bronchitis due to unspecified organism"
},
{
"Code": "R42",
"Description": "Dizziness and giddiness"
},
{
"Code": "J069",
"Description": "Acute upper respiratory infection, unspecified"
}
],
"ICD9_Procedure": []
}
````
# Example Clinical Adapted Thai Medical Extraction
````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Task : Extract input the following patient information into output format Tone: the following medical text into
Thai in a fluent and elegant style.
Output Format.1.Age: \n2.Gender: \n3.Weight :\n4.Height : \n5.Chief Complaint: \n6.Symptoms and Signs: \n7.Medical History: \n
8.Current Medications: \n9.Laboratory Results: \n10.Imaging Findings: \n11.Allergy: \n12.Drug Allergy:
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
ผู้ป่วยของเราเป็นชายถนัดทั้งสองมือ อายุ 43 ปี มีประวัติการชักที่ไม่สามารถควบคุมได้มาเป็นเวลา 20 ปี ลักษณะการชักของเขามักจะรวมถึงการรู้สึกร้อนวูบวาบและอาการทางประสาทสัมผัสอื่น ๆ
ที่พัฒนาไปสู่การเคลื่อนไหวของกล้ามเนื้อที่มีจุดศูนย์กลางส่วนใหญ่ทางด้านขวา การตรวจหาสาเหตุของการชักรวมถึงการถ่ายภาพด้วยคลื่นแม่เหล็กไฟฟ้า (MRI) ซึ่งเผยให้เห็นเนื้องอกไขมันขนาดใหญ่ที่เส้นกลางสมอง
การพัฒนาไม่สมบูรณ์ของคอร์ปัสคาโลซัมบางส่วน และรอยโรคที่อยู่ใกล้เคียงในสมองส่วนหน้าซ้ายที่คาดว่าจะเป็นเนื้องอกกลีอาล (glial neoplasm) ตามลักษณะภาพถ่ายทางรังสี
รอยโรคในสมองส่วนหน้าซ้ายด้านหน้าและตรงกลางประกอบด้วยการกลายเป็นหินปูนแบบเป็นก้อนพร้อมการเพิ่มขึ้นของสัญญาณ FLAIR ที่กว้างขวางซึ่งเกี่ยวข้องกับไจรัสซิงกูเลตทั้งสองข้างและสมองส่วนหน้าซ้าย
(รูปที่ ).\n\nการจัดการทางการแพทย์ล้มเหลวในการควบคุมการชักของเขาและเขาถูกส่งต่อเพื่อหาทางเลือกในการรักษาด้วยการผ่าตัด รอยโรคที่เพิ่มขึ้นถูกสังเกตด้วยการถ่ายภาพเพิ่มเติมและขอบเขตของอาการบวมน้ำก็เพิ่มขึ้นด้วย
ความกังวลเกี่ยวกับการพัฒนาเนื้องอกกลีอาลที่เพิ่มขึ้นและการควบคุมการชักที่ไม่ดีทำให้มีการแนะนำให้ทำการผ่าตัด
การตัดสินใจถูกทำขึ้นเพื่อดำเนินการผ่าตัดนำทางด้วยระบบประสาทเพื่อตัดมวลที่เพิ่มขึ้นในสมองส่วนหน้าซ้ายและการตัดสมองส่วนหน้าบางส่วนโดยใช้การตรวจคลื่นไฟฟ้าสมองระหว่างการผ่าตัด
(intraoperative electroencephalogram - EEG), การทำแผนที่คอร์ติคอล (cortical mapping) และการตรวจวัดศักย์ไฟฟ้าที่เกิดจากการกระตุ้นประสาทรับความรู้สึก
(somatosensory evoked potentials - SSEP)\n\nตัวอย่างที่ส่งไปตรวจทางพยาธิวิทยาแบบแช่แข็งในระหว่างการผ่าตัดพบว่ามีเส้นใยโรเซนธาล (Rosenthal fibers)
และการกลายเป็นหินปูนแบบเป็นจุดซึ่งคาดว่าจะเป็นเนื้องอกกลีอาล การประเมินทางพยาธิวิทยาแบบถาวรเผยให้เห็นเนื้องอกไขมัน (lipoma) และความผิดปกติของคอร์ติคอลแบบเฉพาะจุด
(focal cortical dysplasia) แบบ Palmini Type IA ในสมองที่อยู่ใกล้เคียง ความผิดปกติเล็กน้อยของโครงสร้างคอร์ติคอลและการเกิดกลีโอซิส (gliosis)
ในเนื้อสมองขาวที่เกี่ยวข้องสามารถเห็นได้ในคราบสีฮีมาโทซิลินและอีโอซิน (hematoxylin and eosin - H&E) และคราบสีโปรตีนกรดกลีอาลไฟบริลลารี (glial fibrillary acidic protein - GFAP)
และการย้อมสีโปรตีนเส้นประสาท (neurofilament protein - NFP) ในบริเวณที่เกิดกลีโอซิสไม่แสดงหลักฐานของเซลล์ประสาทที่ผิดรูป เซลล์ประสาทขนาดใหญ่ หรือเซลล์ลูกโป่ง (รูปที่ ).\n\n
การถ่ายภาพหลังการผ่าตัด (รูปที่ ) แสดงให้เห็นการตัดรอยโรคที่เพิ่มขึ้นใกล้เคียงทั้งหมดในสมองส่วนหน้าซ้ายและไจรัสซิงกูเลต การลดลงอย่างมากของอาการบวมน้ำที่เกิดจากหลอดเลือดถูกสังเกตในภาพ
FLAIR เนื้องอกไขมันที่เส้นกลางสมองไม่เปลี่ยนแปลง\n\nการฟื้นตัวหลังการตัดรวมถึงอาการเสริมในบริเวณมอเตอร์ซินโดรมระดับปานกลางรวมถึงอาการอ่อนแรงครึ่งซีก การพูดล่าช้า และการเปลี่ยนแปลงทางอารมณ์
การเปลี่ยนแปลงเหล่านี้ค่อยๆ หายไปและพบว่าผู้ป่วยมีความสมบูรณ์ทางระบบประสาทในการติดตามผลที่คลินิก การควบคุมการชักเกิดขึ้นทันทีหลังการตัด ยาต้านการชักถูกใช้อย่างต่อเนื่องและค่อยๆ
ถูกถอดออกในเดือนต่อมาโดยไม่มีการกลับมาเป็นซ้ำ ผู้ป่วยไม่มีการชักเป็นเวลา 3 ปีหลังการตัด การถ่ายภาพติดตามผล 18 เดือนพบการเปลี่ยนแปลง T2/FLAIR ที่คงที่พร้อมกับบริเวณเล็ก ๆ
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
1. อายุ: 43 ปี\n
2. เพศ: ชาย\n
3. น้ำหนัก: ไม่ระบุ\n
4. ส่วนสูง: ไม่ระบุ\n
5. อาการหลัก: การชักที่ไม่สามารถควบคุมได้\n
6. อาการและสัญญาณ: รู้สึกร้อนวูบวาบ, อาการทางประสาทสัมผัส, การเคลื่อนไหวของกล้ามเนื้อที่มีจุดศูนย์กลางส่วนใหญ่ทางด้านขวา\n
7. ประวัติการรักษา: การจัดการทางการแพทย์ล้มเหลวในการควบคุมการชัก\n
8. ยาที่ใช้ในปัจจุบัน: ยาต้านการชัก\n9. ผลการตรวจทางห้องปฏิบัติการ: ไม่ระบุ\n
10. ผลการตรวจภาพ: MRI แสดงเนื้องอกไขมันขนาดใหญ่ที่เส้นกลางสมอง, การพัฒนาไม่สมบูรณ์ของคอร์ปัสคาโลซัมบางส่วน, รอยโรคที่อยู่ใกล้เคียงในสมองส่วนหน้าซ้ายที่คาดว่าจะเป็นเนื้องอกกลีอาล\n
11. อาการแพ้: ไม่ระบุ\n
12. อาการแพ้ยา: ไม่ระบุ\n
การผ่าตัดนำทางด้วยระบบประสาทเพื่อตัดมวลที่เพิ่มขึ้นในสมองส่วนหน้าซ้ายและการตัดสมองส่วนหน้าบางส่วนได้ถูกดำเนินการ
และผลการตรวจทางพยาธิวิทยาเผยให้เห็นว่าเป็นเนื้องอกไขมันและความผิดปกติของคอร์ติคอลแบบเฉพาะจุด (focal cortical dysplasia) แบบ Palmini Type IA
ในสมองที่อยู่ใกล้เคียง การฟื้นตัวหลังการผ่าตัดรวมถึงอาการเสริมในบริเวณมอเตอร์ซินโดรมระดับปานกลาง แต่การควบคุมการชักเกิดขึ้นทันทีหลังการผ่าตัดและผู้ป่วยไม่มีการชักเป็นเวลา
3 ปีหลังการผ่าตัด การถ่ายภาพติดตามผล 18 เดือนพบการเปลี่ยนแปลง T2/FLAIR ที่คงที่พร้อมกับบริเวณเล็ก ๆ ที่เพิ่มขึ้นของการเพิ่มขึ้นใกล้เคียงที่ไม่เปลี่ยนแปลง.
````
# How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
model_id = "EIRTHAIMED/Llama-3.1-EIRAI-8B"
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
# Load the base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
# quantization_config=nf4_config, # uncomment this line for 4 bit loading
device_map="auto",
attn_implementation="flash_attention_2"
)
messages = [
{"role": "system", "content": "You are an expert medical assistant named EIR , developed by EIR Thai Medical LLM. You are to be a helpful, respectful, and honest assistant."},
{"role": "user", "content": "การใช้ clinical tracer มีบทบาทอย่างไรในการพัฒนาคุณภาพการดูแลผู้ป่วย?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids, streamer = text_streamer, max_new_tokens = 1500, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
```
```
@article{EirAI,
title={Eir: Thai Medical Large Language Models},
author={Yutthakorn Thiprak and Rungtam Ngodngamthaweesuk and Songtam Ngodngamtaweesuk, MD},
year={2024},
journal={arXiv preprint arXiv:2409.08523},
url={https://arxiv.org/abs/2409.08523}
}
```
---
**Thank you very much**
Asst. Prof. Dr. Ekapol Chuangsuwanich and Praj Bhargava, Research Engineer at Meta, for your valuable endorsement of our preprint paper on arXiv.
**Thank you**
Draft Reviewer Report
[Kullawat Chaowanawatee](https://www.computing.psu.ac.th/profile/index.php?staffid=coc0051) and [Dr. Jakapan Suaboot](https://www.computing.psu.ac.th/profile/index.php?staffid=coc0056) from Prince of Songkla University, Phuket Campus
<br>
Draft Industry Reviewer Report
[Mr. Piyawat Maneenual](https://ieeexplore.ieee.org/author/37086452350), Assistant IT Manager, Thonburi Rajyindee Hospital<br>
|
MohsenABG/speecht5_finetuned_emirhan_tr | MohsenABG | 2024-09-16T10:07:11Z | 61 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-09-14T11:00:35Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: speecht5_finetuned_emirhan_tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_emirhan_tr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5641
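A rough inference sketch (an illustrative example rather than verified usage of this checkpoint): it borrows the processor from the base `microsoft/speecht5_tts` checkpoint, pairs the fine-tuned model with the standard HiFi-GAN vocoder, assumes the target language is Turkish, and uses a zero vector as a stand-in speaker embedding.

```python
# Sketch only: processor/vocoder choices and the placeholder speaker embedding are assumptions.
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("MohsenABG/speecht5_finetuned_emirhan_tr")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Merhaba, nasılsınız?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; a real x-vector gives much better audio
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```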
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7879 | 0.3972 | 100 | 0.6982 |
| 0.7078 | 0.7944 | 200 | 0.6603 |
| 0.6728 | 1.1917 | 300 | 0.6420 |
| 0.646 | 1.5889 | 400 | 0.5979 |
| 0.6276 | 1.9861 | 500 | 0.5954 |
| 0.6213 | 2.3833 | 600 | 0.5852 |
| 0.6161 | 2.7805 | 700 | 0.5805 |
| 0.6043 | 3.1778 | 800 | 0.5701 |
| 0.6012 | 3.5750 | 900 | 0.5624 |
| 0.5951 | 3.9722 | 1000 | 0.5641 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.0+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-2.8b-deduped-int2-step98000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T10:06:51Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T10:06:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prsdm/phi-2-medquad | prsdm | 2024-09-16T10:06:44Z | 28 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"medical",
"question-answering",
"custom_code",
"en",
"dataset:prsdm/MedQuad-phi2-1k",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"doi:10.57967/hf/1695",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-01-26T18:34:08Z | ---
license: mit
datasets:
- prsdm/MedQuad-phi2-1k
language:
- en
base_model:
- microsoft/phi-2
pipeline_tag: question-answering
tags:
- medical
---
# Fine-tuning Phi-2 Model on Medical Dataset
<p>
😸<a href="https://github.com/prsdm/fine-tuning-llms/blob/main/Fine-tuning-phi-2-model.ipynb">GitHub</a> •📝<a href="https://medium.com/@prasadmahamulkar/fine-tuning-phi-2-a-step-by-step-guide-e672e7f1d009">Article</a> • Models & Datasets on: 🤗<a href="https://huggingface.co/prsdm">Hugging Face</a>
</p>

This repository provides a collection of Jupyter notebooks that demonstrate how to fine-tune large language models using various tools and techniques.
Fine-tuning, or instruction tuning, is the process where a pre-trained model is further trained on a smaller dataset to adapt its knowledge to a specific task or domain. This process tweaks the model’s parameters to perform specific tasks. There are two main fine-tuning methods:
### Supervised fine-tuning (SFT):
In SFT, the model is trained on a labeled dataset. The labeled dataset typically contains examples of instruction (input) and response (output) pairs relevant to the task. In this process, the model learns how to respond to specific instructions.
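As a rough illustration (not the exact code from the notebook; TRL argument names vary between versions, and newer releases move `dataset_text_field`/`max_seq_length` into `SFTConfig`), SFT with TRL can look like this:

```python
# Illustrative SFT sketch; assumes the dataset exposes a single formatted "text" column.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_model, trust_remote_code=True)

dataset = load_dataset("prsdm/MedQuad-phi2-1k", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumption about the dataset layout
    max_seq_length=1024,
    args=TrainingArguments(output_dir="phi-2-medquad", per_device_train_batch_size=4, num_train_epochs=1),
)
trainer.train()
```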
### Reinforcement Learning from Human Feedback (RLHF):
In RLHF, the model interacts with users, generates responses, and receives feedback in the form of reinforcement signals. The model then learns and improves its performance based on the feedback it receives.
### Details of each notebook:
| LLMs | Description | Dataset | Notebooks | Tuned Model |
|---------------------------|-------------|---------|-----------|-------------|
| [Phi-2](https://huggingface.co/microsoft/phi-2) | This model has been fine-tuned on a Medical dataset to answer questions related to diseases and symptoms. (used SFT method) | [Dataset](https://huggingface.co/datasets/prsdm/MedQuad-phi2-1k) | [](https://colab.research.google.com/github/prsdm/fine-tuning-llms/blob/main/Fine-tuning-phi-2-model.ipynb) | [phi-2-medquad](https://huggingface.co/prsdm/phi-2-medquad) |
| [llama-2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) | This model has been fine-tuned on a dataset with human-generated prompts to answer questions related to general knowledge. (used SFT method) | [Dataset](https://huggingface.co/datasets/prsdm/finance-llama2-1k) | [](https://colab.research.google.com/github/prsdm/fine-tuning-llms/blob/main/Fine-tuning-llama-2-model.ipynb) | [llama-2-finance](https://huggingface.co/prsdm/llama-2-finance) |
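### Example usage
A minimal inference sketch for the tuned `phi-2-medquad` model; the `### Question:` / `### Answer:` prompt format below is an assumption based on common instruction templates, not a documented requirement of this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prsdm/phi-2-medquad", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("prsdm/phi-2-medquad", trust_remote_code=True)

prompt = "### Question: What are the symptoms of asthma?\n### Answer:"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```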
|
mihofer/poca-SoccerTwos | mihofer | 2024-09-16T10:00:58Z | 34 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-09-16T09:57:38Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mihofer/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mohitsaxena2005/mistral_7b_code | mohitsaxena2005 | 2024-09-16T09:55:51Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T05:58:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Aratako/calm3-22b-RP-v2-GGUF | Aratako | 2024-09-16T09:55:09Z | 987 | 4 | null | [
"gguf",
"roleplay",
"ja",
"base_model:Aratako/calm3-22b-RP-v2",
"base_model:quantized:Aratako/calm3-22b-RP-v2",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-16T04:30:57Z | ---
license: cc-by-nc-sa-4.0
language:
- ja
tags:
- roleplay
base_model:
- Aratako/calm3-22b-RP-v2
---
# calm3-22b-RP-v2-GGUF
## Overview
This is a quantized GGUF version of [Aratako/calm3-22b-RP-v2](https://huggingface.co/Aratako/calm3-22b-RP-v2). Please see the original model for details.
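A minimal loading sketch with `llama-cpp-python` (the quant file pattern below is an assumption; pick one of the GGUF files actually provided in this repository):

```python
# Sketch only: file name pattern, context size, and token budget are example values.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Aratako/calm3-22b-RP-v2-GGUF",
    filename="*Q4_K_M.gguf",  # glob matching one of the provided quants (assumed naming)
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "こんにちは、自己紹介をしてください。"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```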
## License
Because the training data for this model includes outputs from OpenAI's GPT-4o-mini and Anthropic's Claude 3.5 Sonnet, the model cannot be used for services that compete with those providers.
For this reason, the model is distributed under [CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.ja).
Xu-Ouyang/pythia-2.8b-deduped-int2-step95000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T09:43:55Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T09:41:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmednoor00/distilbert-stock-tweet-sentiment-analysis | ahmednoor00 | 2024-09-16T09:43:22Z | 174 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-09-16T09:39:22Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-stock-tweet-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-stock-tweet-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6088
- Accuracy: 0.7758
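A quick inference sketch using the `pipeline` API (the returned label names depend on how the model's `id2label` mapping was configured, e.g. `LABEL_0`/`LABEL_1` unless custom names were set):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ahmednoor00/distilbert-stock-tweet-sentiment-analysis",
)
print(classifier("Strong earnings beat, raising my price target."))
```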
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6873 | 1.0 | 1000 | 0.5699 | 0.7678 |
| 0.4813 | 2.0 | 2000 | 0.5673 | 0.7732 |
| 0.3664 | 3.0 | 3000 | 0.6088 | 0.7758 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
AmeerH/FPT_774M_FineTune_Flights_v3 | AmeerH | 2024-09-16T09:36:10Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-15T19:38:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Moriacrafter/Qwen1.5-4B-4bit_DepressionDetection_Reddit | Moriacrafter | 2024-09-16T09:33:28Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T09:29:21Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kheopss/kheops_compressed | kheopss | 2024-09-16T09:23:28Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:croissantllm/CroissantLLMBase",
"base_model:finetune:croissantllm/CroissantLLMBase",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-16T09:22:44Z | ---
base_model: croissantllm/CroissantLLMBase
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** kheopss
- **License:** apache-2.0
- **Finetuned from model :** croissantllm/CroissantLLMBase
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Niraya666/wmc_v2_vit_base_wm811k_cls_contra_learning_0916 | Niraya666 | 2024-09-16T08:59:14Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-16T02:39:30Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: wmc_v2_vit_base_wm811k_cls_contra_learning_0916
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wmc_v2_vit_base_wm811k_cls_contra_learning_0916
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0661
- Accuracy: 0.9768
- Precision: 0.9627
- Recall: 0.9551
- F1: 0.9585
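A quick inference sketch with the `image-classification` pipeline (`wafer_map.png` is a placeholder path to a wafer-map image; label names come from the model's `id2label` mapping):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Niraya666/wmc_v2_vit_base_wm811k_cls_contra_learning_0916",
)
print(classifier("wafer_map.png"))
```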
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.1711 | 0.1697 | 100 | 0.6405 | 0.7559 | 0.6494 | 0.5790 | 0.5526 |
| 0.7143 | 0.3394 | 200 | 0.3116 | 0.8971 | 0.8478 | 0.7631 | 0.7576 |
| 0.584 | 0.5091 | 300 | 0.2060 | 0.9489 | 0.9137 | 0.8836 | 0.8940 |
| 0.4654 | 0.6788 | 400 | 0.1431 | 0.9603 | 0.9190 | 0.9289 | 0.9230 |
| 0.4465 | 0.8485 | 500 | 0.1176 | 0.9679 | 0.9458 | 0.9295 | 0.9373 |
| 0.3368 | 1.0182 | 600 | 0.1395 | 0.9550 | 0.9338 | 0.9244 | 0.9248 |
| 0.3741 | 1.1880 | 700 | 0.1541 | 0.9528 | 0.9287 | 0.9328 | 0.9269 |
| 0.3191 | 1.3577 | 800 | 0.1039 | 0.9697 | 0.9510 | 0.9453 | 0.9470 |
| 0.3354 | 1.5274 | 900 | 0.0952 | 0.9709 | 0.9530 | 0.9539 | 0.9529 |
| 0.3122 | 1.6971 | 1000 | 0.0799 | 0.9761 | 0.9456 | 0.9665 | 0.9556 |
| 0.295 | 1.8668 | 1100 | 0.0770 | 0.9758 | 0.9615 | 0.9534 | 0.9567 |
| 0.2993 | 2.0365 | 1200 | 0.0650 | 0.9794 | 0.9655 | 0.9597 | 0.9624 |
| 0.227 | 2.2062 | 1300 | 0.0717 | 0.9763 | 0.9598 | 0.9573 | 0.9584 |
| 0.2508 | 2.3759 | 1400 | 0.0653 | 0.9785 | 0.9605 | 0.9621 | 0.9613 |
| 0.3053 | 2.5456 | 1500 | 0.0629 | 0.9797 | 0.9623 | 0.9617 | 0.9620 |
| 0.2183 | 2.7153 | 1600 | 0.0676 | 0.9767 | 0.9597 | 0.9553 | 0.9572 |
| 0.219 | 2.8850 | 1700 | 0.0661 | 0.9768 | 0.9627 | 0.9551 | 0.9585 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-2.8b-deduped-int2-step86000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T08:54:04Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T08:51:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Gluon-8B-i1-GGUF | mradermacher | 2024-09-16T08:50:08Z | 21 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"llama",
"llama-3.1",
"en",
"base_model:rmdhirr/Gluon-8B",
"base_model:quantized:rmdhirr/Gluon-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-16T01:22:58Z | ---
base_model: rmdhirr/Gluon-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- roleplay
- llama
- llama-3.1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rmdhirr/Gluon-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gluon-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
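As a sketch, one way to fetch and run one of the quants listed below with `llama-cpp-python` (file name taken from the table; context size and token budget are just example values):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Gluon-8B-i1-GGUF",
    filename="Gluon-8B.i1-Q4_K_M.gguf",  # the "fast, recommended" quant from the table
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello!", max_tokens=64)["choices"][0]["text"])
```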
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
YisusLn/poca-SoccerTwos | YisusLn | 2024-09-16T08:45:30Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-09-16T08:45:00Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: YisusLn/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ziqian0925/sd-class-butterflies-64 | ziqian0925 | 2024-09-16T08:45:17Z | 46 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-09-16T08:44:59Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ziqian0925/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
minhquan24/Finetuned-T5-Small | minhquan24 | 2024-09-16T08:41:56Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-09-16T08:41:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ziqian0925/sd-class-butterflies-32 | ziqian0925 | 2024-09-16T08:34:19Z | 46 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-09-16T08:34:12Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ziqian0925/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Xu-Ouyang/pythia-2.8b-deduped-int2-step71000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T08:28:46Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T08:28:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kkasiviswanath/t5_base_qna_dataset_2_v1 | kkasiviswanath | 2024-09-16T08:26:57Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-09-16T08:26:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Felladrin/gguf-Q5_K_M-NanoLM-70M-Instruct-v1 | Felladrin | 2024-09-16T08:25:36Z | 5 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text2text-generation",
"en",
"base_model:Mxode/NanoLM-70M-Instruct-v1",
"base_model:quantized:Mxode/NanoLM-70M-Instruct-v1",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text2text-generation | 2024-09-16T08:25:34Z | ---
base_model: Mxode/NanoLM-70M-Instruct-v1
language:
- en
license: gpl-3.0
pipeline_tag: text2text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF
This model was converted to GGUF format from [`Mxode/NanoLM-70M-Instruct-v1`](https://huggingface.co/Mxode/NanoLM-70M-Instruct-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Mxode/NanoLM-70M-Instruct-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF --hf-file nanolm-70m-instruct-v1-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF --hf-file nanolm-70m-instruct-v1-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF --hf-file nanolm-70m-instruct-v1-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF --hf-file nanolm-70m-instruct-v1-q5_k_m-imat.gguf -c 2048
```
|
lightblue/kurage-vi | lightblue | 2024-09-16T08:24:10Z | 5 | 1 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"vi",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-09-14T03:50:16Z | ---
license: apache-2.0
language:
- vi
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue) based on the Qwen 2 model ([Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)).
This version of the model has been trained to perform RAG in Vietnamese.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2).
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue with the single-chunk RAG mode sometimes saying that it cannot answer a question based on the text when it actually can. This is because our single-chunk training data was split 50:50 between answerable and unanswerable scenarios, which made the model overly conservative. We will address this in a week or two when we re-train using 90:10 data with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
NOTE - Change the model name to that of this repository to use this model with Vietnamese questions (e.g. "lightblue/kurage-en" → "lightblue/kurage-vi").
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input, and it will determine whether it can answer the question based on the context, outputting an answer if it can. This allows for parallel computing of multiple contexts at the same time.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write a longer answer by writing " <<Long>>" after your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in Vietnamese based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years."
def create_qagen_prompt(context):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
</details>
<br/>
# Training data
We trained on chunks sourced from documents in the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
had been evaluated by a state-of-the-art LLM as containing a higher amount of educational information.
We took chunks of size 250 tokens, 500 tokens, and 1000 tokens randomly for each document.
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
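
As a rough illustration of that negative-selection step, here is a minimal sketch assuming the `FlagEmbedding` package; the example chunks are hypothetical and this is not the exact pipeline we ran:

```python
# Minimal sketch of picking hard negatives with BAAI/bge-m3 dense embeddings.
# Assumes the FlagEmbedding package is installed; the chunks below are hypothetical.
import numpy as np
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3")

source_chunk = "Japan's current account surplus in July was 3.2 trillion yen ..."
candidate_chunks = [
    "Chunk about BOJ policy board comments ...",
    "Chunk about a new Bank of Japan executive director ...",
    "Chunk about yen appreciation and securities investment ...",
]

source_vec = model.encode([source_chunk])["dense_vecs"][0]     # normalized dense vector
candidate_vecs = model.encode(candidate_chunks)["dense_vecs"]  # shape: (n_candidates, dim)

# With normalized vectors, cosine similarity is just a dot product.
similarities = candidate_vecs @ source_vec
hard_negative_ids = np.argsort(-similarities)[:2]              # most similar non-source chunks
hard_negatives = [candidate_chunks[i] for i in hard_negative_ids]
print(hard_negatives)
```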
The training data for this model can be found at [lightblue/kurage_training_data](https://huggingface.co/datasets/lightblue/kurage_training_data). |
lightblue/kurage-ru | lightblue | 2024-09-16T08:23:33Z | 6 | 3 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"ru",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-09-13T14:10:03Z | ---
license: apache-2.0
language:
- ru
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue) based on the Qwen 2 model ([Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)).
This version of the model has been trained to perform RAG in Russian.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2).
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue with the single-chunk RAG mode sometimes saying that it cannot answer a question based on the text when it actually can. This is because our single-chunk training data was split 50:50 between answerable and unanswerable scenarios, which made the model overly conservative. We will address this in a week or two when we re-train using 90:10 data with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
NOTE - Change the model name to that of this repository to use this model with Russian questions (e.g. "lightblue/kurage-en" → "lightblue/kurage-ru").
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input, and it will determine whether it can answer the question based on the context, outputting an answer if it can. This allows for parallel computing of multiple contexts at the same time.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write a longer answer by writing " <<Long>>" after your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in Russian based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
def create_qagen_prompt(context):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
</details>
<br/>
# Training data
We trained on chunks sourced from documents in the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
a state-of-the-art LLM had judged to contain a high amount of educational information.
We randomly sampled chunks of 250, 500, and 1000 tokens from each document.
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
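For illustration, this negative-selection step can be approximated with off-the-shelf tools. The following is a minimal sketch, not the exact pipeline we used, and it assumes [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) can be loaded through the `sentence-transformers` library:
```python
# Minimal sketch of hard-negative selection with dense embeddings.
# Assumes BAAI/bge-m3 loads via sentence-transformers; illustrative only,
# not the exact script used to build the training data.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BAAI/bge-m3")

question = "What is Japan's primary income balance currently?"
candidate_chunks = [
    "Japan's current account surplus in July was 3.2 trillion yen...",
    "Finance Minister Shunichi Suzuki appointed Kenji Suwazono...",
    "In the yen appreciation phase of August, it has become a topic...",
]

# Encode the question and candidate chunks into normalized dense vectors.
question_emb = model.encode(question, normalize_embeddings=True)
chunk_embs = model.encode(candidate_chunks, normalize_embeddings=True)

# Rank chunks by cosine similarity to the question.
scores = util.cos_sim(question_emb, chunk_embs)[0]
ranked = sorted(zip(candidate_chunks, scores.tolist()), key=lambda x: x[1], reverse=True)

# High-scoring chunks that do not actually answer the question are kept as hard negatives.
for chunk, score in ranked:
    print(f"{score:.3f}  {chunk[:60]}")
```
Chunks that score highly against a question but do not contain its answer make particularly useful hard negatives.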
The training data for this model can be found at [lightblue/kurage_training_data](https://huggingface.co/datasets/lightblue/kurage_training_data). |
lightblue/kurage-ko | lightblue | 2024-09-16T08:23:07Z | 5 | 4 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-09-13T10:48:33Z | ---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue) based on the Qwen 2 model ([Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)).
This version of the model has been trained to perform RAG in Korean.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2).
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue with the single-chunk RAG mode sometimes saying that it cannot answer a question based on the text when it actually can. This is because our single-chunk training data was split 50:50 between answerable and unanswerable scenarios, making the model overly conservative. We will address this in a week or two when we re-train using a 90:10 split with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
NOTE - Change the model name to that of this repository to use this model with Korean questions (e.g. "lightblue/kurage-en" → "lightblue/kurage-ko").
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input, and it will determine whether it can answer the question based on the context, outputting an answer if it can. This allows for parallel computing of multiple contexts at the same time.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to produce one by appending " <<Long>>" to your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Output:
```markdown
<<References>>
1
<<Answer>>
Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in Korean based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
def create_qagen_prompt(context):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
</details>
<br/>
# Training data
We trained on chunks sourced from documents in the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
a state-of-the-art LLM had judged to contain a high amount of educational information.
We randomly sampled chunks of 250, 500, and 1000 tokens from each document.
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
The training data for this model can be found at [lightblue/kurage_training_data](https://huggingface.co/datasets/lightblue/kurage_training_data). |
lightblue/kurage-multilingual | lightblue | 2024-09-16T08:21:57Z | 13 | 29 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"am",
"ar",
"bg",
"bn",
"cs",
"da",
"de",
"el",
"en",
"es",
"fa",
"fi",
"fr",
"gu",
"ha",
"hi",
"hu",
"id",
"it",
"ja",
"jv",
"kn",
"ko",
"lt",
"mr",
"nl",
"no",
"pl",
"pt",
"ro",
"ru",
"sk",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"uk",
"ur",
"vi",
"yo",
"zh",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-09-16T03:27:49Z | ---
license: apache-2.0
language:
- am
- ar
- bg
- bn
- cs
- da
- de
- el
- en
- es
- fa
- fi
- fr
- gu
- ha
- hi
- hu
- id
- it
- ja
- jv
- kn
- ko
- lt
- mr
- nl
- no
- pl
- pt
- ro
- ru
- sk
- sv
- sw
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- yo
- zh
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue) based on the Qwen 2 model ([Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)).
This version of the model has been trained to perform RAG in 44 languages.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2).
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue with the single-chunk RAG mode sometimes saying that it cannot answer a question based on the text when it actually can. This is because our single-chunk training data was split 50:50 between answerable and unanswerable scenarios, making the model overly conservative. We will address this in a week or two when we re-train using a 90:10 split with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-multilingual")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
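The completion follows the `<<References>>` / `<<Answer>>` format shown in the comments above, so it can be post-processed with plain string handling. Below is a minimal, illustrative parser; the `parse_kurage_output` helper is just a sketch, not an official API.
```python
# Minimal, illustrative parser for the "<<References>> ... <<Answer>> ..." output format.
# The function name and return format are a sketch, not an official API.
def parse_kurage_output(text: str):
    references, answer = [], None
    if "<<References>>" in text:
        body = text.split("<<References>>", 1)[1]
        if "<<Answer>>" in body:
            body, answer_part = body.split("<<Answer>>", 1)
            answer = answer_part.strip()
        refs = body.strip()
        if refs.lower() != "none":
            references = [int(tok) for tok in refs.replace(",", " ").split() if tok.isdigit()]
    return references, answer

example = "<<References>>\n2\n\n<<Answer>>\n4.4 trillion yen."
print(parse_kurage_output(example))  # ([2], '4.4 trillion yen.')
# The same helper can be applied to outputs[0].outputs[0].text from the code above.
```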
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-multilingual")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input, and it will determine whether it can answer the question based on the context, outputting an answer if it can. This allows for parallel computing of multiple contexts at the same time.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-multilingual")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
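Because each chunk is scored independently, the per-chunk outputs above can be filtered so that only chunks the model judged relevant are kept, for example before a second multi-chunk call. A minimal sketch reusing the `contexts` and `outputs` variables from the code above:
```python
# Keep only chunks whose single-chunk output cites a reference (i.e. not "None").
# Reuses `contexts` and `outputs` from the example above; illustrative only.
relevant_chunks = []
for context, output in zip(contexts, outputs):
    text = output.outputs[0].text
    ref_section = text.split("<<References>>", 1)[-1].strip()
    first_line = ref_section.splitlines()[0].strip() if ref_section else "None"
    if first_line.lower() != "none":
        relevant_chunks.append(context)

print(f"{len(relevant_chunks)} of {len(contexts)} chunks judged relevant.")
```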
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to produce one by appending " <<Long>>" to your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Output:
```markdown
<<References>>
1
<<Answer>>
Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-multilingual")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in a given language based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-multilingual")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
To define the language of the Q&A, you can optionally include a `<<Q&A Language>>` control code after the text, as shown below.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Q&A Language>>
English
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-multilingual")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
def create_qagen_prompt(context, language_name = None):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
if language_name is not None:
        str_inputs += f"\n\n<<Q&A Language>>\n{language_name}"
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
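When generating Q&A pairs over many chunks, for example to pre-index a document set, the completions can be parsed back into question/answer records with simple string handling. A minimal, illustrative sketch (the `parse_qa` helper is ours, not part of the model):
```python
# Minimal, illustrative parser for generated "<<Question>> ... <<Answer>> ..." completions.
# The helper name and return format are a sketch, not an official API.
def parse_qa(text: str):
    question, answer = None, None
    if "<<Question>>" in text:
        body = text.split("<<Question>>", 1)[1]
        if "<<Answer>>" in body:
            q_part, a_part = body.split("<<Answer>>", 1)
            question, answer = q_part.strip(), a_part.strip()
    return question, answer

# Reuses `outputs` from the example above.
qa_pairs = [parse_qa(o.outputs[0].text) for o in outputs]
print(qa_pairs[0])
```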
</details>
<br/>
# Training data
We trained on chunks sourced from documents in the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
a state-of-the-art LLM had judged to contain a high amount of educational information.
We randomly sampled chunks of 250, 500, and 1000 tokens from each document.
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
The training data for this model can be found at [lightblue/kurage_training_data](https://huggingface.co/datasets/lightblue/kurage_training_data). |
Felladrin/gguf-Q5_K_M-NanoLM-1B-Instruct-v2 | Felladrin | 2024-09-16T08:20:53Z | 109 | 1 | null | [
"gguf",
"text-generation-inference",
"llama-cpp",
"gguf-my-repo",
"text2text-generation",
"en",
"dataset:Mxode/Magpie-Pro-10K-GPT4o-mini",
"base_model:Mxode/NanoLM-1B-Instruct-v2",
"base_model:quantized:Mxode/NanoLM-1B-Instruct-v2",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text2text-generation | 2024-09-16T08:20:23Z | ---
base_model: Mxode/NanoLM-1B-Instruct-v2
datasets:
- Mxode/Magpie-Pro-10K-GPT4o-mini
language:
- en
license: gpl-3.0
pipeline_tag: text2text-generation
tags:
- text-generation-inference
- llama-cpp
- gguf-my-repo
---
# Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF
This model was converted to GGUF format from [`Mxode/NanoLM-1B-Instruct-v2`](https://huggingface.co/Mxode/NanoLM-1B-Instruct-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Mxode/NanoLM-1B-Instruct-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-1b-instruct-v2-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-1b-instruct-v2-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-1b-instruct-v2-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-1b-instruct-v2-q5_k_m-imat.gguf -c 2048
```
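Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API; the route and default port below (`/v1/chat/completions` on 8080) reflect recent llama.cpp builds and may differ in yours. A minimal sketch of querying it from Python:
```python
# Query a running llama-server via its OpenAI-compatible chat endpoint.
# The route and port assume recent llama.cpp defaults; adjust if yours differ.
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "The meaning to life and the universe is"}
        ],
        "max_tokens": 128,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```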
|
lightblue/kurage-ar | lightblue | 2024-09-16T08:20:29Z | 11 | 1 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"ar",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-09-13T05:53:59Z | ---
license: apache-2.0
language:
- ar
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue) based on the Qwen 2 model ([Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)).
This version of the model has been trained to perform RAG in Arabic.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2).
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue with the single-chunk RAG mode sometimes saying that it cannot answer a question based on the text when it actually can. This is because our single-chunk training data was split 50:50 between answerable and unanswerable scenarios, making the model overly conservative. We will address this in a week or two when we re-train using a 90:10 split with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
NOTE - Change the model name to that of this repository to use this model with Arabic questions (e.g. "lightblue/kurage-en" → "lightblue/kurage-ar").
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input; it will determine whether the question can be answered from the context and output an answer if it can. This allows multiple contexts to be processed in parallel.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. If you require a longer answer, you can prompt the model to produce one by appending " <<Long>>" to your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Output:
```markdown
<<References>>
1
<<Answer>>
Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in English based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
def create_qagen_prompt(context):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
</details>
<br/>
# Training data
We trained on chunks sourced from documents in the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
a state-of-the-art LLM had evaluated as containing a high amount of educational information.
We took chunks of size 250 tokens, 500 tokens, and 1000 tokens randomly for each document.
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
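As a rough illustration of this negative-selection step (a minimal sketch, not our exact training code; the `select_negatives` helper and its parameters are named only for this example), hard negatives can be picked by dense-embedding similarity like this:
```python
# Sketch: pick hard negatives for a chunk by BAAI/bge-m3 dense-embedding similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("BAAI/bge-m3")

def select_negatives(target_chunk, candidate_chunks, n_negatives=3):
    # Embed the target and all candidates; normalised vectors make dot product = cosine similarity.
    vecs = embedder.encode([target_chunk] + candidate_chunks, normalize_embeddings=True)
    target_vec, cand_vecs = vecs[0], vecs[1:]
    sims = cand_vecs @ target_vec
    # The candidates most similar to the target make the hardest negatives.
    order = np.argsort(-sims)
    return [candidate_chunks[i] for i in order[:n_negatives]]
```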
The training data for this model can be found at [lightblue/kurage_training_data](https://huggingface.co/datasets/lightblue/kurage_training_data). |
lightblue/kurage-ja | lightblue | 2024-09-16T08:12:19Z | 13 | 6 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"ja",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-09-11T03:39:10Z | ---
license: apache-2.0
language:
- ja
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue).
This version of the model has been trained to perform RAG in Japanese.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2). A multilingual model is coming soon!
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue where the single-chunk RAG mode sometimes says it cannot answer a question based on the text when it actually can. This is because our single-chunk training data was split 50:50 between answerable and unanswerable examples, which made the model overly conservative. We will address this in a week or two when we re-train using a 90:10 split with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"日銀の中川順子審議委員は11日、実質金利は現在きわめて低い水準にあるとした上で、先行き日銀の経済・物価見通しが実現していくとすれば、物価目標実現の観点から金融緩和の度合いを調整していくことになると述べた。",
"7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"8月の円高局面で、日本の機関投資家が過去最大の対外証券投資に動いていたことが、外為市場で話題となっている。"
]
question = "現在、日本の第一次所得収支はいくらですか?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4兆円
```
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input; it will determine whether the question can be answered from the context and output an answer if it can. This allows multiple contexts to be processed in parallel.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. If you require a longer answer, you can prompt the model to produce one by appending " <<Long>>" to your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Output:
```markdown
<<References>>
1
<<Answer>>
Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in Japanese based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-ja")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
def create_qagen_prompt(context):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
</details>
<br/>
# Training data
We trained on chunks sourced from documents in the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
a state-of-the-art LLM had evaluated as containing a high amount of educational information.
We took chunks of size 250 tokens, 500 tokens, and 1000 tokens randomly for each document.
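As a rough sketch of this chunking step (illustrative only; the tokenizer choice and the `make_chunks` helper are assumptions for this example, not our exact preprocessing code), chunks can be cut at random offsets in token space:
```python
# Sketch: cut random 250/500/1000-token chunks out of a document.
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lightblue/kurage-ja")

def make_chunks(document, sizes=(250, 500, 1000)):
    token_ids = tokenizer(document, add_special_tokens=False)["input_ids"]
    chunks = []
    for size in sizes:
        if len(token_ids) <= size:
            chunks.append(document)  # document shorter than the window: keep it whole
            continue
        start = random.randrange(len(token_ids) - size)
        chunks.append(tokenizer.decode(token_ids[start:start + size]))
    return chunks
```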
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model. |
Xu-Ouyang/pythia-2.8b-deduped-int2-step57000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T08:06:36Z | 58 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T08:06:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Donnager-70B-v1-GGUF | mradermacher | 2024-09-16T07:55:13Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TheDrummer/Donnager-70B-v1",
"base_model:quantized:TheDrummer/Donnager-70B-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-16T02:40:50Z | ---
base_model: TheDrummer/Donnager-70B-v1
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TheDrummer/Donnager-70B-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Donnager-70B-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
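For the split quants in the table below (the Q6_K and Q8_0 files come in two parts), joining the parts is a plain byte-level concatenation. A minimal sketch, using the Q6_K file names from the table as an example:
```python
# Sketch: merge a multi-part GGUF download back into a single file.
parts = [
    "Donnager-70B-v1.Q6_K.gguf.part1of2",
    "Donnager-70B-v1.Q6_K.gguf.part2of2",
]
with open("Donnager-70B-v1.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 20):  # copy in 1 MiB blocks
                merged.write(chunk)
```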
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Donnager-70B-v1-GGUF/resolve/main/Donnager-70B-v1.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Sudhan16/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Sudhan16 | 2024-09-16T07:53:08Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-09-16T07:50:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
devkyle/whisper-tiny-10dp | devkyle | 2024-09-16T07:52:23Z | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-09-14T23:21:32Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-akan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-akan
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1400
- Wer: 49.1687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
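The hyperparameters above correspond roughly to the following `Seq2SeqTrainingArguments` (an illustrative sketch, not the original training script; the output directory and any options not listed above are assumptions):
```python
# Sketch of the hyperparameters above expressed as transformers training arguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-akan",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=2000,
    fp16=True,  # Native AMP mixed precision
)
```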
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3923 | 5.0 | 250 | 0.8148 | 61.9777 |
| 0.1045 | 10.0 | 500 | 0.9445 | 53.9829 |
| 0.0414 | 15.0 | 750 | 1.0241 | 49.9639 |
| 0.0187 | 20.0 | 1000 | 1.0862 | 51.7999 |
| 0.0089 | 25.0 | 1250 | 1.1386 | 51.1927 |
| 0.0025 | 30.0 | 1500 | 1.1284 | 49.1832 |
| 0.0004 | 35.0 | 1750 | 1.1355 | 48.7061 |
| 0.0004 | 40.0 | 2000 | 1.1400 | 49.1687 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Vincentwrx/llama31 | Vincentwrx | 2024-09-16T07:38:41Z | 8 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"region:us"
] | null | 2024-09-14T11:27:51Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf | RichardErkhov | 2024-09-16T07:37:02Z | 13 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-16T01:41:00Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
POLAR-14B_4.3_very_big_sft - GGUF
- Model creator: https://huggingface.co/spow12/
- Original model: https://huggingface.co/spow12/POLAR-14B_4.3_very_big_sft/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [POLAR-14B_4.3_very_big_sft.Q2_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q2_K.gguf) | Q2_K | 4.92GB |
| [POLAR-14B_4.3_very_big_sft.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ3_XS.gguf) | IQ3_XS | 5.47GB |
| [POLAR-14B_4.3_very_big_sft.IQ3_S.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ3_S.gguf) | IQ3_S | 5.77GB |
| [POLAR-14B_4.3_very_big_sft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q3_K_S.gguf) | Q3_K_S | 5.74GB |
| [POLAR-14B_4.3_very_big_sft.IQ3_M.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ3_M.gguf) | IQ3_M | 5.97GB |
| [POLAR-14B_4.3_very_big_sft.Q3_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q3_K.gguf) | Q3_K | 6.4GB |
| [POLAR-14B_4.3_very_big_sft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q3_K_M.gguf) | Q3_K_M | 6.4GB |
| [POLAR-14B_4.3_very_big_sft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q3_K_L.gguf) | Q3_K_L | 6.97GB |
| [POLAR-14B_4.3_very_big_sft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ4_XS.gguf) | IQ4_XS | 7.18GB |
| [POLAR-14B_4.3_very_big_sft.Q4_0.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_0.gguf) | Q4_0 | 7.48GB |
| [POLAR-14B_4.3_very_big_sft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ4_NL.gguf) | IQ4_NL | 7.57GB |
| [POLAR-14B_4.3_very_big_sft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_K_S.gguf) | Q4_K_S | 7.54GB |
| [POLAR-14B_4.3_very_big_sft.Q4_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_K.gguf) | Q4_K | 7.97GB |
| [POLAR-14B_4.3_very_big_sft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_K_M.gguf) | Q4_K_M | 7.97GB |
| [POLAR-14B_4.3_very_big_sft.Q4_1.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_1.gguf) | Q4_1 | 8.3GB |
| [POLAR-14B_4.3_very_big_sft.Q5_0.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_0.gguf) | Q5_0 | 9.12GB |
| [POLAR-14B_4.3_very_big_sft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_K_S.gguf) | Q5_K_S | 9.12GB |
| [POLAR-14B_4.3_very_big_sft.Q5_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_K.gguf) | Q5_K | 9.37GB |
| [POLAR-14B_4.3_very_big_sft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_K_M.gguf) | Q5_K_M | 9.37GB |
| [POLAR-14B_4.3_very_big_sft.Q5_1.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_1.gguf) | Q5_1 | 9.94GB |
| [POLAR-14B_4.3_very_big_sft.Q6_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q6_K.gguf) | Q6_K | 10.87GB |
| [POLAR-14B_4.3_very_big_sft.Q8_0.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q8_0.gguf) | Q8_0 | 14.07GB |
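A minimal sketch of running one of these files with llama-cpp-python (the chosen quant, context size, and sampling values below are illustrative assumptions, not part of this release):
```python
# Hedged example: load a downloaded GGUF quant with llama-cpp-python.
# The file name, n_ctx and sampling values are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="POLAR-14B_4.3_very_big_sft.Q4_K_M.gguf",  # any file from the table above
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm(
    "한국 경제의 현재 상황을 간단히 설명해줘.",
    max_tokens=256,
    temperature=0.1,
)
print(out["choices"][0]["text"])
```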
Original model description:
---
library_name: transformers
license: cc-by-nc-4.0
language:
- ko
- en
---
# spow12/POLAR-14B_4.3_very_big_sft
<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a supervised fine-tuned version of [x2bee/POLAR-14B-v0.2](https://huggingface.co/x2bee/POLAR-14B-v0.2), trained for Korean with DeepSpeed and TRL.
### Training Data
- Trained on a mix of public, private, and generated data (about 50k examples)
### Usage
```python
import torch
from transformers import TextStreamer, pipeline, AutoTokenizer, AutoModelForCausalLM
model_id = 'spow12/POLAR-14B_4.3_very_big_sft'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map='auto',
)
model.eval()
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device_map='auto')
streamer = TextStreamer(tokenizer)
generation_configs = dict(
max_new_tokens=2048,
num_return_sequences=1,
temperature=0.1,
# early_stopping=True,
repetition_penalty=1.2,
num_beams=1,
do_sample=True,
top_k=20,
top_p=0.9,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
streamer=streamer
)
sys_message = """당신은 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답해야합니다.
사용자가 제공하는 정보를 세심하게 분석하여 사용자의 의도를 신속하게 파악하고 그에 따라 답변을 생성해야합니다.
항상 매우 자연스러운 한국어로 응답하세요."""
message = [
{
'role': "system",
'content': sys_message
},
{
'role': 'user',
'content': "현재의 경제상황에 대해 어떻게 생각해?."
}
]
conversation = pipe(message, **generation_configs)
conversation[-1]
```
### License
This model is licensed under CC BY-NC 4.0, which allows others to share and adapt the model for non-commercial purposes.
Here is the original README.md.
|
SrikrishnaIyer/RoBERTa_WML_distill-Babylm-100M-2024 | SrikrishnaIyer | 2024-09-16T07:32:05Z | 2,557 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2024-09-13T02:29:52Z | # When Babies Teach Babies: Peer Knowledge Sharing Beats Teacher-Guided Distillation in Small-Data LMs
This model uses weighted mutual learning (WML) to find and train distilled versions of a teacher model using peer-to-peer learning. It builds on the approach described in "Weighted Mutual Learning with Diversity-Driven Model Compression" (Zhang et al., 2022), with some key differences.
## Approach
### Peer Model Initialization
Unlike the original paper which uses differential pruning of the teacher model, we use Bayesian optimization to initialize smaller peer models:
- For example, if `num_peers = 4`, target parameter counts are N/2, N/3, N/4, N/5 (where N is the teacher model size)
- Optimize `num_layers`, `attention_heads`, and `hidden_size` to reach target parameter counts
- This ensures diversity while also reducing model size
The key difference is that pruning (as used in the original paper) only masks parameters, while our distillation approach actually reduces the model architecture size.
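A hedged sketch of how such an initialization search could look with the `bayesian-optimization` package (the helper, bounds, and target size below are illustrative assumptions, not the authors' code):
```python
# Hedged sketch: search for a smaller RoBERTa config whose parameter count lands
# near a target such as N/2. Helper names, bounds and the target are assumptions.
from bayes_opt import BayesianOptimization
from transformers import RobertaConfig, RobertaForMaskedLM

TARGET_PARAMS = 62_000_000  # e.g. roughly N/2 for a ~125M-parameter teacher

def neg_distance_to_target(num_layers, num_heads, hidden_mult):
    num_layers = int(round(num_layers))
    num_heads = int(round(num_heads))
    hidden_size = num_heads * int(round(hidden_mult)) * 8  # keep hidden_size divisible by heads
    cfg = RobertaConfig(
        num_hidden_layers=num_layers,
        num_attention_heads=num_heads,
        hidden_size=hidden_size,
        intermediate_size=4 * hidden_size,
    )
    n_params = sum(p.numel() for p in RobertaForMaskedLM(cfg).parameters())
    return -abs(n_params - TARGET_PARAMS)  # maximize closeness to the target size

opt = BayesianOptimization(
    f=neg_distance_to_target,
    pbounds={"num_layers": (2.0, 12.0), "num_heads": (2.0, 12.0), "hidden_mult": (4.0, 12.0)},
    random_state=42,
)
opt.maximize(init_points=10, n_iter=100)  # cf. bayesian_init_points / bayesian_n_iter below
print(opt.max)  # best configuration found and its score
```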
### Weighted Mutual Learning
We use the bi-level optimization method from the paper to minimize the WML loss and ensemble loss:
1. Inner loop: Train peer models using weighted knowledge distillation loss (cross entropy + KL divergence)
2. Outer loop: Update peer weights using mirror gradient descent to optimize ensemble performance (ensemble loss)
This allows the framework to dynamically adjust the importance of each peer during training.
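A hedged sketch of the inner-loop objective for a single peer (tensor and function names are illustrative; the outer-loop mirror-descent update of the peer weights is not shown):
```python
# Hedged sketch of the weighted mutual learning inner-loop loss for peer i.
# alpha corresponds to loss_alpha in the table below; names are illustrative.
import torch
import torch.nn.functional as F

def wml_peer_loss(logits_i, other_peer_logits, peer_weights, labels, alpha=0.5, T=1.0):
    """Cross entropy on the labels plus weighted KL terms against the other peers."""
    ce = F.cross_entropy(logits_i, labels)
    log_p_i = F.log_softmax(logits_i / T, dim=-1)
    kl = logits_i.new_zeros(())
    for w_j, logits_j in zip(peer_weights, other_peer_logits):
        p_j = F.softmax(logits_j.detach() / T, dim=-1)  # peers are not updated through each other
        kl = kl + w_j * F.kl_div(log_p_i, p_j, reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kl
```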
## Hyperparameters of the champion peer model
| Hyperparameter | Value |
|----------------|-------|
| weight_decay | 0.1 |
| beta1 | 0.9 |
| beta2 | 0.95 |
| bayesian_init_points | 10 |
| bayesian_n_iter | 100 |
| grad_clip | 1.0 |
| prune_importance | 'l1' |
| layer_bound | 0.9 |
| batch_size | 3 |
| block_size | 512 |
| num_epochs | 100 |
| loss_alpha | 0.5 |
| num_batches | 60 |
| warmup_iters | 5 |
| learning_rate | 0.05 |
| lr_decay_iters | 200 |
| min_lr | 0.005 |
| enable_early_stopping | True |
## References
Zhang, M., Wang, L., Campos, D., Huang, W., Guo, C., & Yang, B. (2022). Weighted Mutual Learning with Diversity-Driven Model Compression. Advances in Neural Information Processing Systems, 35.
|
erkhem-gantulga/whisper-medium-mn | erkhem-gantulga | 2024-09-16T07:31:08Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"mn",
"dataset:mozilla-foundation/common_voice_17_0",
"dataset:google/fleurs",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-08-17T00:51:01Z | ---
language:
- mn
base_model: openai/whisper-medium
library_name: transformers
datasets:
- mozilla-foundation/common_voice_17_0
- google/fleurs
tags:
- audio
- automatic-speech-recognition
widget:
- example_title: Common Voice sample 1
src: sample1.flac
- example_title: Common Voice sample 2
src: sample2.flac
model-index:
- name: whisper-medium-mn
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: common_voice_17_0
config: mn
split: test
args:
language: mn
metrics:
- name: Test WER
type: wer
value: 12.9580
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Mn - Erkhembayar Gantulga
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 17.0 and Google Fleurs datasets.
It achieves the following results on the evaluation set:
- Loss: 0.1083
- Wer: 12.9580
## Model description
More information needed
## Intended uses & limitations
More information needed
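A minimal transcription sketch (an assumption based on the standard Hugging Face Transformers ASR pipeline, not usage code supplied with this card):
```python
# Hedged example: transcribe Mongolian speech with the transformers ASR pipeline.
# The generate_kwargs values are assumptions, not author-documented settings.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="erkhem-gantulga/whisper-medium-mn",
    torch_dtype=torch.float16,
    device="cuda:0",  # or "cpu"
)

result = asr("sample1.flac", generate_kwargs={"language": "mongolian", "task": "transcribe"})
print(result["text"])
```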
## Training and evaluation data
Datasets used for training:
- [Common Voice 17.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0)
- [Google Fleurs](https://huggingface.co/datasets/google/fleurs)
For training, the Common Voice 17.0 and Google Fleurs datasets were combined:
```python
from datasets import load_dataset, DatasetDict, concatenate_datasets
from datasets import Audio
common_voice = DatasetDict()
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_17_0", "mn", split="train+validation+validated", use_auth_token=True)
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_17_0", "mn", split="test", use_auth_token=True)
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
common_voice = common_voice.remove_columns(
["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes", "variant"]
)
google_fleurs = DatasetDict()
google_fleurs["train"] = load_dataset("google/fleurs", "mn_mn", split="train+validation", use_auth_token=True)
google_fleurs["test"] = load_dataset("google/fleurs", "mn_mn", split="test", use_auth_token=True)
google_fleurs = google_fleurs.remove_columns(
["id", "num_samples", "path", "raw_transcription", "gender", "lang_id", "language", "lang_group_id"]
)
google_fleurs = google_fleurs.rename_column("transcription", "sentence")
dataset = DatasetDict()
dataset["train"] = concatenate_datasets([common_voice["train"], google_fleurs["train"]])
dataset["test"] = concatenate_datasets([common_voice["test"], google_fleurs["test"]])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2986 | 0.4912 | 500 | 0.3557 | 40.1515 |
| 0.2012 | 0.9823 | 1000 | 0.2310 | 28.3512 |
| 0.099 | 1.4735 | 1500 | 0.1864 | 23.4453 |
| 0.0733 | 1.9646 | 2000 | 0.1405 | 18.3024 |
| 0.0231 | 2.4558 | 2500 | 0.1308 | 16.5645 |
| 0.0191 | 2.9470 | 3000 | 0.1155 | 14.5569 |
| 0.0059 | 3.4381 | 3500 | 0.1122 | 13.4728 |
| 0.006 | 3.9293 | 4000 | 0.1083 | 12.9580 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
iantc104/real_occluded_key_insertion_3arms_zed_static_act | iantc104 | 2024-09-16T07:27:53Z | 12 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | 2024-09-16T07:27:26Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
sandeepaffine/meta-llama-Llama-2-7b-chat-hf-8bit-lmhead-embedtoken-keywords-v17-cp2-merged | sandeepaffine | 2024-09-16T07:26:33Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-09-16T07:18:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Xu-Ouyang/pythia-2.8b-deduped-int2-step36000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T07:22:09Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-09-16T07:21:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |