modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 12:29:30) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 548 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 12:29:18) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
yofitofi/shlomper
|
yofitofi
| 2023-10-24T08:52:51Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"code",
"poopik",
"text-classification",
"he",
"dataset:Open-Orca/OpenOrca",
"license:other",
"region:us"
] |
text-classification
| 2023-10-18T12:19:17Z |
---
license: other
license_name: shlomper-license
license_link: LICENSE
datasets:
- Open-Orca/OpenOrca
language:
- he
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- code
- poopik
---
|
Taeyeun72/whisper-small-emergency
|
Taeyeun72
| 2023-10-24T08:52:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:arrow",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-23T14:05:45Z |
---
language:
- ko
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- arrow
metrics:
- wer
model-index:
- name: whisper-small-emergency
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: whisper-small-kor
type: arrow
config: default
split: train
args: 'config: ko, split: valid'
metrics:
- name: Wer
type: wer
value: 21.789546555901694
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-emergency
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the whisper-small-kor dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2212
- Wer: 21.7895
- Cer: 10.3463
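The following is not part of the original card: a minimal inference sketch, assuming the checkpoint follows the standard Whisper interface in `transformers`; the audio file path is a placeholder.
```python
# Hypothetical usage sketch (not from the card): Korean speech recognition via
# the standard ASR pipeline. Replace the path with your own audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Taeyeun72/whisper-small-emergency")
print(asr("emergency_call_ko.wav")["text"])
```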
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.2586 | 0.02 | 100 | 2.0061 | 38.3870 | 19.3958 |
| 0.9821 | 0.05 | 200 | 0.8927 | 37.3486 | 16.9619 |
| 0.7244 | 0.07 | 300 | 0.6577 | 32.9353 | 15.2939 |
| 0.505 | 0.1 | 400 | 0.4395 | 33.8006 | 16.8397 |
| 0.4397 | 0.12 | 500 | 0.3908 | 29.2489 | 13.3393 |
| 0.3602 | 0.15 | 600 | 0.3710 | 27.7259 | 12.8741 |
| 0.4321 | 0.17 | 700 | 0.3558 | 27.7778 | 12.8224 |
| 0.3979 | 0.19 | 800 | 0.3504 | 27.5528 | 12.6063 |
| 0.2614 | 0.22 | 900 | 0.3434 | 28.2451 | 13.5601 |
| 0.3725 | 0.24 | 1000 | 0.3362 | 26.8086 | 12.8177 |
| 0.4098 | 0.27 | 1100 | 0.3329 | 26.8086 | 13.1466 |
| 0.3083 | 0.29 | 1200 | 0.3240 | 25.6663 | 12.0566 |
| 0.324 | 0.32 | 1300 | 0.3169 | 24.7490 | 11.3659 |
| 0.3437 | 0.34 | 1400 | 0.3090 | 24.2471 | 10.9383 |
| 0.3719 | 0.36 | 1500 | 0.3064 | 24.4548 | 11.2155 |
| 0.3563 | 0.39 | 1600 | 0.3013 | 24.0222 | 11.0887 |
| 0.3493 | 0.41 | 1700 | 0.3036 | 24.1606 | 11.1779 |
| 0.3132 | 0.44 | 1800 | 0.3011 | 24.0741 | 11.1685 |
| 0.3024 | 0.46 | 1900 | 0.2920 | 24.4202 | 11.2014 |
| 0.2982 | 0.49 | 2000 | 0.2873 | 22.9664 | 10.4262 |
| 0.3309 | 0.51 | 2100 | 0.2880 | 23.3991 | 10.8208 |
| 0.3209 | 0.53 | 2200 | 0.2811 | 21.9280 | 10.2288 |
| 0.2778 | 0.56 | 2300 | 0.2883 | 22.6895 | 10.5060 |
| 0.3391 | 0.58 | 2400 | 0.2796 | 21.9280 | 10.1818 |
| 0.3261 | 0.61 | 2500 | 0.2757 | 22.3607 | 10.1865 |
| 0.2711 | 0.63 | 2600 | 0.2746 | 22.9491 | 10.4356 |
| 0.2723 | 0.66 | 2700 | 0.2708 | 22.3088 | 10.5624 |
| 0.3152 | 0.68 | 2800 | 0.2681 | 21.8934 | 10.0127 |
| 0.248 | 0.7 | 2900 | 0.2679 | 22.2568 | 10.0644 |
| 0.2354 | 0.73 | 3000 | 0.2665 | 21.7203 | 9.8576 |
| 0.2828 | 0.75 | 3100 | 0.2628 | 21.4261 | 9.9422 |
| 0.2759 | 0.78 | 3200 | 0.2652 | 21.2703 | 9.8623 |
| 0.2904 | 0.8 | 3300 | 0.2606 | 21.2876 | 9.8388 |
| 0.2844 | 0.83 | 3400 | 0.2600 | 21.8761 | 10.0362 |
| 0.2815 | 0.85 | 3500 | 0.2554 | 20.9069 | 9.5992 |
| 0.2713 | 0.87 | 3600 | 0.2573 | 20.8550 | 9.5334 |
| 0.2748 | 0.9 | 3700 | 0.2566 | 21.5126 | 9.8811 |
| 0.2447 | 0.92 | 3800 | 0.2526 | 20.5088 | 9.3455 |
| 0.3255 | 0.95 | 3900 | 0.2517 | 20.3358 | 11.3048 |
| 0.2786 | 0.97 | 4000 | 0.2489 | 20.8030 | 9.5898 |
| 0.245 | 1.0 | 4100 | 0.2523 | 21.4607 | 9.7167 |
| 0.1655 | 1.02 | 4200 | 0.2470 | 20.4396 | 9.5287 |
| 0.1898 | 1.04 | 4300 | 0.2422 | 19.9550 | 9.0871 |
| 0.1394 | 1.07 | 4400 | 0.2429 | 20.0242 | 9.2750 |
| 0.1592 | 1.09 | 4500 | 0.2433 | 19.9896 | 9.0824 |
| 0.1542 | 1.12 | 4600 | 0.2428 | 20.2492 | 9.3126 |
| 0.1296 | 1.14 | 4700 | 0.2437 | 19.4531 | 8.9038 |
| 0.1477 | 1.17 | 4800 | 0.2432 | 19.7300 | 11.0605 |
| 0.1551 | 1.19 | 4900 | 0.2436 | 20.0762 | 11.3236 |
| 0.1581 | 1.21 | 5000 | 0.2435 | 19.7992 | 10.9994 |
| 0.2033 | 1.24 | 5100 | 0.2434 | 19.8339 | 9.1763 |
| 0.1444 | 1.26 | 5200 | 0.2399 | 19.8165 | 10.9806 |
| 0.1543 | 1.29 | 5300 | 0.2371 | 19.1762 | 10.8913 |
| 0.1735 | 1.31 | 5400 | 0.2350 | 19.4185 | 9.0166 |
| 0.1552 | 1.34 | 5500 | 0.2363 | 19.0897 | 8.8098 |
| 0.1495 | 1.36 | 5600 | 0.2332 | 19.1070 | 8.8145 |
| 0.1636 | 1.38 | 5700 | 0.2350 | 18.6051 | 10.5718 |
| 0.1827 | 1.41 | 5800 | 0.2333 | 18.4493 | 8.5091 |
| 0.1464 | 1.43 | 5900 | 0.2344 | 19.2454 | 8.8850 |
| 0.1999 | 1.46 | 6000 | 0.2325 | 23.1222 | 10.9900 |
| 0.1547 | 1.48 | 6100 | 0.2318 | 19.3839 | 8.8709 |
| 0.1296 | 1.51 | 6200 | 0.2339 | 19.3146 | 8.9085 |
| 0.1535 | 1.53 | 6300 | 0.2317 | 22.5684 | 10.8302 |
| 0.1467 | 1.55 | 6400 | 0.2310 | 19.1070 | 8.7958 |
| 0.1709 | 1.58 | 6500 | 0.2338 | 18.9685 | 8.7441 |
| 0.1359 | 1.6 | 6600 | 0.2295 | 19.0550 | 8.6548 |
| 0.1611 | 1.63 | 6700 | 0.2293 | 18.5877 | 8.5608 |
| 0.1232 | 1.65 | 6800 | 0.2309 | 19.4012 | 8.9273 |
| 0.1692 | 1.68 | 6900 | 0.2288 | 18.6224 | 8.8756 |
| 0.1544 | 1.7 | 7000 | 0.2265 | 18.3454 | 8.5467 |
| 0.1282 | 1.72 | 7100 | 0.2256 | 18.6570 | 8.6642 |
| 0.1414 | 1.75 | 7200 | 0.2258 | 22.1011 | 10.2993 |
| 0.157 | 1.77 | 7300 | 0.2259 | 18.8474 | 8.6501 |
| 0.1592 | 1.8 | 7400 | 0.2249 | 18.6570 | 8.5702 |
| 0.0998 | 1.82 | 7500 | 0.2246 | 18.8127 | 8.6125 |
| 0.1486 | 1.85 | 7600 | 0.2225 | 18.3281 | 8.3024 |
| 0.1336 | 1.87 | 7700 | 0.2221 | 18.5704 | 8.4387 |
| 0.1388 | 1.9 | 7800 | 0.2222 | 18.5531 | 8.5044 |
| 0.1341 | 1.92 | 7900 | 0.2212 | 22.0665 | 10.4215 |
| 0.1548 | 1.94 | 8000 | 0.2215 | 21.8588 | 10.3275 |
| 0.1276 | 1.97 | 8100 | 0.2182 | 21.8069 | 10.3040 |
| 0.1567 | 1.99 | 8200 | 0.2200 | 18.1031 | 8.3541 |
| 0.1054 | 2.02 | 8300 | 0.2201 | 21.5646 | 10.2335 |
| 0.0793 | 2.04 | 8400 | 0.2219 | 21.1838 | 10.1161 |
| 0.0944 | 2.07 | 8500 | 0.2225 | 21.5819 | 10.3510 |
| 0.0824 | 2.09 | 8600 | 0.2230 | 21.7203 | 10.2476 |
| 0.0863 | 2.11 | 8700 | 0.2222 | 21.6684 | 10.2241 |
| 0.1102 | 2.14 | 8800 | 0.2233 | 21.5819 | 10.3228 |
| 0.0852 | 2.16 | 8900 | 0.2226 | 21.8588 | 10.2946 |
| 0.0796 | 2.19 | 9000 | 0.2227 | 21.9626 | 10.3651 |
| 0.1023 | 2.21 | 9100 | 0.2223 | 21.7722 | 10.4309 |
| 0.08 | 2.24 | 9200 | 0.2216 | 21.4780 | 10.2664 |
| 0.0703 | 2.26 | 9300 | 0.2218 | 21.5992 | 10.2429 |
| 0.0923 | 2.28 | 9400 | 0.2212 | 21.4434 | 10.2006 |
| 0.0694 | 2.31 | 9500 | 0.2217 | 21.4780 | 10.2194 |
| 0.1033 | 2.33 | 9600 | 0.2216 | 21.5126 | 10.2382 |
| 0.0913 | 2.36 | 9700 | 0.2214 | 21.5299 | 10.2194 |
| 0.0882 | 2.38 | 9800 | 0.2212 | 21.7376 | 10.2758 |
| 0.0852 | 2.41 | 9900 | 0.2212 | 21.7203 | 10.3087 |
| 0.0862 | 2.43 | 10000 | 0.2212 | 21.7895 | 10.3463 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Erfan/mT5-small_Farsi_Title_Generator
|
Erfan
| 2023-10-24T08:50:09Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"Title-Generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- en
tags:
- Title-Generation
metrics:
- ROUGH
---
|
aisha1fahad/output
|
aisha1fahad
| 2023-10-24T08:48:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-24T07:43:26Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 1.18.3
- Tokenizers 0.14.1
|
Jiffies/my_awesome_model
|
Jiffies
| 2023-10-24T08:45:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-24T08:08:08Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.68048
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6002
- Accuracy: 0.6805
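Not part of the original card: a minimal sketch of querying this IMDB fine-tune through the `transformers` text-classification pipeline.
```python
# Hypothetical usage sketch (not from the card): classify a movie review.
from transformers import pipeline

clf = pipeline("text-classification", model="Jiffies/my_awesome_model")
print(clf("This was a surprisingly enjoyable film."))
```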
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6229 | 1.0 | 3125 | 0.6002 | 0.6805 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.13.2
|
Mihir1108/aspire-invoice-extractor
|
Mihir1108
| 2023-10-24T08:27:39Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-10-24T07:07:41Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
rawatnikhil857/movieRec-bloom-1b7
|
rawatnikhil857
| 2023-10-24T07:58:11Z | 3 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:bigscience/bloom-1b7",
"base_model:adapter:bigscience/bloom-1b7",
"region:us"
] | null | 2023-10-24T07:58:07Z |
---
library_name: peft
base_model: bigscience/bloom-1b7
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
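Since the card leaves this section empty, here is a minimal sketch (an editor's assumption, not the author's code) that attaches the PEFT adapter to the bigscience/bloom-1b7 base model declared in the metadata.
```python
# Hypothetical loading sketch: base model + PEFT adapter as declared in the metadata.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b7")
model = PeftModel.from_pretrained(base, "rawatnikhil857/movieRec-bloom-1b7")

# Illustrative prompt only; the repo name suggests movie recommendation.
inputs = tokenizer("Recommend a movie similar to Inception:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```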
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
zrjin/icefall-asr-aishell-zipformer-large-2023-10-24
|
zrjin
| 2023-10-24T07:54:19Z | 0 | 0 | null |
[
"tensorboard",
"onnx",
"region:us"
] | null | 2023-10-24T02:50:37Z |
See https://github.com/k2-fsa/icefall/pull/1216 for more information.
|
sanguineAlacrity/Taxi-v3
|
sanguineAlacrity
| 2023-10-24T07:52:22Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-24T07:52:19Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="sanguineAlacrity/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sanguineAlacrity/q-FrozenLake-v1-4x4-noSlippery
|
sanguineAlacrity
| 2023-10-24T07:45:29Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-24T07:45:25Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="sanguineAlacrity/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
akter-sust/rl_zoo3
|
akter-sust
| 2023-10-24T07:39:48Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-24T06:26:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 579.00 +/- 90.49
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga akter-sust -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga akter-sust -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga akter-sust
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
RachidAR/Marx-3b-v3-gguf
|
RachidAR
| 2023-10-24T07:33:05Z | 31 | 3 | null |
[
"gguf",
"en",
"dataset:acrastt/EverythingLM-V3-ShareGPT",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-10-24T06:31:50Z |
---
license: cc-by-sa-4.0
datasets:
- acrastt/EverythingLM-V3-ShareGPT
language:
- en
---
Experimental!
https://github.com/ggerganov/llama.cpp/pull/3586
|
GuysTrans/bart-large-finetuned-pubmed-finetuned-eheath-pubmed
|
GuysTrans
| 2023-10-24T07:26:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:Kevincp560/bart-large-finetuned-pubmed",
"base_model:finetune:Kevincp560/bart-large-finetuned-pubmed",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-23T14:36:30Z |
---
license: apache-2.0
base_model: Kevincp560/bart-large-finetuned-pubmed
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-finetuned-pubmed-finetuned-eheath-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-pubmed-finetuned-eheath-pubmed
This model is a fine-tuned version of [Kevincp560/bart-large-finetuned-pubmed](https://huggingface.co/Kevincp560/bart-large-finetuned-pubmed) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6994
- Rouge1: 8.4409
- Rouge2: 2.0617
- Rougel: 6.7452
- Rougelsum: 8.1365
- Gen Len: 19.9994
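Not part of the original card: a minimal sketch assuming the checkpoint is used for abstractive summarization, in line with its BART-PubMed lineage.
```python
# Hypothetical usage sketch (not from the card): summarize a passage.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="GuysTrans/bart-large-finetuned-pubmed-finetuned-eheath-pubmed",
)
text = "Replace this string with the biomedical passage you want summarized."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```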
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.3576 | 1.0 | 22880 | 2.6994 | 8.4409 | 2.0617 | 6.7452 | 8.1365 | 19.9994 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
krakenalt/saiga2_70b_gguf
|
krakenalt
| 2023-10-24T07:26:15Z | 2 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2023-10-23T18:48:59Z |
Llama.cpp-compatible versions of the original [70B model](https://huggingface.co/IlyaGusev/saiga2_70b_lora).
q5_k_m quantization of the model: [saiga2_70b](https://huggingface.co/IlyaGusev/saiga2_70b_gguf)
* Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
|
KGsteven/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
|
KGsteven
| 2023-10-24T07:16:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-24T07:16:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
dvijay/llama-2-7b-chat-hf-guanaco-1k
|
dvijay
| 2023-10-24T07:12:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-24T04:54:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
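For reference, the list above corresponds roughly to the following `BitsAndBytesConfig` (an editor's reconstruction, not code from the repo).
```python
# Approximate reconstruction of the quantization config listed above
# (4-bit fp4, float32 compute dtype); not taken from the original repo.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
    llm_int8_threshold=6.0,
)
```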
### Framework versions
- PEFT 0.5.0
|
FunAudio/paraformer-zh
|
FunAudio
| 2023-10-24T07:04:10Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-10-24T07:04:10Z |
---
license: other
license_name: model-license
license_link: LICENSE
---
|
s3nh/akjindal53244-Arithmo-Mistral-7B-GGUF
|
s3nh
| 2023-10-24T06:53:44Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-24T06:47:11Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
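While the card's inference section is still a TODO, one plausible route (an editor's assumption, not the author's instructions) is llama-cpp-python; the file name below is a placeholder for whichever quantization you download from this repo.
```python
# Hypothetical sketch: load one of the GGUF files with llama-cpp-python.
# "model-q4_k_m.gguf" is a placeholder file name, not a file listed by the card.
from llama_cpp import Llama

llm = Llama(model_path="model-q4_k_m.gguf", n_ctx=2048)
out = llm("Question: What is 17 * 24?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```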
# Original model card
|
na20b039/bert-finetuned-ner
|
na20b039
| 2023-10-24T06:35:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-24T06:22:22Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0732
- Precision: 0.9366
- Recall: 0.9525
- F1: 0.9445
- Accuracy: 0.9867
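Not part of the original card: a minimal sketch of running the checkpoint through the token-classification pipeline (the label set depends on the unspecified training data).
```python
# Hypothetical usage sketch (not from the card): aggregate sub-token predictions
# into entity spans.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="na20b039/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```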
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0273 | 1.0 | 1756 | 0.0761 | 0.9255 | 0.9424 | 0.9339 | 0.9849 |
| 0.0183 | 2.0 | 3512 | 0.0672 | 0.9374 | 0.9505 | 0.9439 | 0.9865 |
| 0.0077 | 3.0 | 5268 | 0.0732 | 0.9366 | 0.9525 | 0.9445 | 0.9867 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Tokenizers 0.14.1
|
jnsulee/ko_math_tokenizer
|
jnsulee
| 2023-10-24T06:30:19Z | 0 | 0 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2022-12-24T06:29:13Z |
---
license: cc-by-4.0
---
# Description: A tokenizer trained on Korean mathematics-education texts
- Training data copyright holders: 두산동아, 미래엔, 비상에듀, 지학사, 한국교육과정평가원 (Korean textbook publishers and KICE)
- Model checkpoint: skt/kogpt2-base-v2
- This tokenizer is under development and used for personal research purposes. Since I do not hold the copyright to the training data, please refrain from unauthorized use.
# What to do
- A language model usable for various tasks in mathematics education is under development.
# How to use
```python
>>> from transformers import AutoTokenizer
>>> old_tokenizer = AutoTokenizer.from_pretrained('skt/kogpt2-base-v2')
>>> tokenizer = AutoTokenizer.from_pretrained('jnsulee/ko_math_tokenizer')
example = "다항식의 덧셈은 동류항끼리 모아서 정리한다.이때 두 다항식의 차 A-B는 A에 B의 각 항의 부호를 바꾼 -B를 더한 것과 같다. 즉, A-B=A+(-B)이다."
old_tokenizer.tokenize(example)
#['▁다', '항', '식의', '▁덧', '셈', '은', '▁동', '류', '항', '끼리', '▁모아서', '▁정리한', '다.', '이', '때', '▁두', '▁다', '항', '식의', '▁차', '▁A', '-B', '는', '▁A', '에', '▁B', '의', '▁각', '▁항의', '▁부', '호를', '▁바꾼', '▁-', 'B', '를', '▁더한', '▁것과', '▁같다.', '▁\n', '즉', ',', '▁A', '-B', '=', 'A', '+', '(', '-B', ')이다.']
tokenizer.tokenize(example)
#['▁다항식', '의', '▁덧셈', '은', '▁동류항', '끼리', '▁모아서', '▁정리', '한다', '.', '이', '때', '▁두', '▁다항식', '의', '▁차', '▁A', '-', 'B', '는', '▁A', '에', '▁B', '의', '▁각', '▁항의', '▁부호', '를', '▁바꾼', '▁-', 'B', '를', '▁더한', '▁것', '과', '▁같', '다', '.', '▁\n즉', ',', '▁A', '-', 'B', '=', 'A', '+', '(', '-', 'B', ')', '이다', '.']
```
|
yeye776/t5-large-finetuned-multi_v2
|
yeye776
| 2023-10-24T06:19:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-large",
"base_model:finetune:paust/pko-t5-large",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-23T06:02:08Z |
---
license: cc-by-4.0
base_model: paust/pko-t5-large
tags:
- generated_from_trainer
model-index:
- name: t5-large-finetuned-multi_v2
results: []
widget:
- text: "브로캐리에 관련된 이용자의 대화인데 분류 및 인식 해줘! :내일 용인에서 상차하고 다시 용인에서 하차하는 화물 추천해줘"
example_title: "화물추천"
- text: "브로캐리에 관련된 이용자의 대화인데 분류 및 인식 해줘! :내일 오후 우면동 날씨"
example_title: "날씨예보"
- text: "브로캐리에 관련된 이용자의 대화인데 분류 및 인식 해줘! :전기충전소 안내해줘"
example_title: "장소안내"
- text: "브로캐리에 관련된 이용자의 대화인데 분류 및 인식 해줘! :경부고속도로 상황 알려줘"
example_title: "일상대화"
- text: "브로캐리에 관련된 이용자의 대화인데 분류 및 인식 해줘! :하차 담당자에게 문의해줘"
example_title: "전화연결"
- text: "브로캐리에 관련된 이용자의 대화인데 분류 및 인식 해줘! :진행해줘"
example_title: "긍부정"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-finetuned-multi_v2
This model is a fine-tuned version of [paust/pko-t5-large](https://huggingface.co/paust/pko-t5-large) on the None dataset.
## Model description
| Intent (의도) | Entities (개체) |
|:------:|:-----------|
| 일상대화 (small talk) | |
| 전화연결 (call connection) | 대상 (target) |
| 장소안내 (place guidance) | 장소 (place), 대상 (target) |
| 날씨예보 (weather forecast) | 날짜 (date), 장소 (place), 대상 (target), 시간 (time), 조건 (condition) |
| 화물추천 (freight recommendation) | 날짜 (date), 시간 (time), 상차 (loading), 하차 (unloading), 기준 (criterion) |
| 긍부정 (positive/negative) | 긍정 (positive), 부정 (negative), 중립 (neutral) |
*대상 (target): loading / unloading location (상차지/하차지)
## How to use
```python
import requests
API_URL = "https://api-inference.huggingface.co/models/yeye776/t5-large-finetuned-multi_v2"
headers = {"Authorization": "Bearer hf_key"}
def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()
prompt = "브로캐리에 관련된 이용자의 대화인데 분류 및 인식 해줘! :"
input = "내일 심야 상차지가 분당인 화물 추천해줘"
output = query({
    "inputs": prompt + input,
    "options": {"wait_for_model": True}
})
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 8
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
ShashiVish/llama-7-int4-r512-cover-letter
|
ShashiVish
| 2023-10-24T06:18:45Z | 5 | 0 |
peft
|
[
"peft",
"tensorboard",
"region:us"
] | null | 2023-10-24T06:17:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
Sid2531/trff
|
Sid2531
| 2023-10-24T06:18:14Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-10-24T06:18:14Z |
---
license: bigscience-openrail-m
---
|
digiplay/VersaMix_base_diffusers
|
digiplay
| 2023-10-24T05:55:37Z | 438 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-13T21:17:37Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/20865?modelVersionId=24838
Original Author's DEMO image :

|
mlninad/mistral-summarizer-finetuned
|
mlninad
| 2023-10-24T05:54:19Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-10-24T05:42:22Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-summarizer-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-summarizer-finetuned
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
TheMightyNostril/ppo-Huggy
|
TheMightyNostril
| 2023-10-24T05:14:44Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-10-24T05:14:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: TheMightyNostril/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pedrowww/aindanfoi
|
pedrowww
| 2023-10-24T05:12:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-24T05:12:26Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: aindanfoi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.00 +/- 29.49
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
willadamskeane/autotrain-over-under-96986146760
|
willadamskeane
| 2023-10-24T05:04:28Z | 1 | 0 |
transformers
|
[
"transformers",
"joblib",
"xgboost",
"autotrain",
"tabular",
"classification",
"tabular-classification",
"dataset:willadamskeane/autotrain-data-over-under",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
tabular-classification
| 2023-10-24T05:03:04Z |
---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- willadamskeane/autotrain-data-over-under
co2_eq_emissions:
emissions: 0.04557412368301007
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 96986146760
- CO2 Emissions (in grams): 0.0456
## Validation Metrics
- Loss: 0.461
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
# data = pd.read_csv("data.csv")
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
```
|
hobbesleland/mistral-viggo-finetune
|
hobbesleland
| 2023-10-24T05:02:04Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-10-24T04:55:04Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- generated_from_trainer
model-index:
- name: mistral-viggo-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-viggo-finetune
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4563 | 0.01 | 50 | 0.7277 |
| 0.5873 | 0.01 | 100 | 0.5276 |
| 0.4951 | 0.02 | 150 | 0.4817 |
| 0.4645 | 0.02 | 200 | 0.4664 |
| 0.4682 | 0.03 | 250 | 0.4541 |
| 0.4569 | 0.03 | 300 | 0.4447 |
| 0.4428 | 0.04 | 350 | 0.4362 |
| 0.4184 | 0.04 | 400 | 0.4326 |
| 0.4174 | 0.05 | 450 | 0.4280 |
| 0.4122 | 0.05 | 500 | 0.4242 |
| 0.4176 | 0.06 | 550 | 0.4228 |
| 0.4105 | 0.06 | 600 | 0.4175 |
| 0.4103 | 0.07 | 650 | 0.4154 |
| 0.4113 | 0.07 | 700 | 0.4133 |
| 0.3979 | 0.08 | 750 | 0.4118 |
| 0.3895 | 0.08 | 800 | 0.4109 |
| 0.4088 | 0.09 | 850 | 0.4092 |
| 0.399 | 0.09 | 900 | 0.4082 |
| 0.4001 | 0.1 | 950 | 0.4075 |
| 0.4067 | 0.1 | 1000 | 0.4072 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
davidkim205/komt-mistral-7b-v1-gguf
|
davidkim205
| 2023-10-24T04:55:11Z | 102 | 10 | null |
[
"gguf",
"text-generation",
"en",
"ko",
"arxiv:2308.06502",
"arxiv:2308.06259",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-24T04:04:26Z |
---
language:
- en
- ko
pipeline_tag: text-generation
---
# komt : korean multi task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **quant methods** : q4_0, q4_1, q5_0, q5_1, q2_k, q3_k, q3_k_m, q3_k_l, q4_k, q4_k_s, q4_k_m, q5_k, q5_k_s, q5_k_m, q8_0, q4_0
## Training
Refer https://github.com/davidkim205/komt
## usage
```
cd llama.cpp
make -j && ./main -m ./models/komt-mistral-7b-v1/ggml-model-q4_1.gguf -p "[INST]인삼은 어떤 효과가 있는가요? [/INST]"
```
output
```
[INST]인삼은 어떤 효과가 있는가요? [/INST]인삼에는 약 600개의 물질이 포함되어 있습니다. 그 중에서도 많은 연구들을 통해 효능을 확인한 것으로 알려진 3가지는 아래와 같습니다.
1. 혈압 조절: 각종 실험에서 인삼이 혈압을 조절하는데 효과를 나타냈으며, 특히 중국의 한 연구자들은 인삼을 사용해 40%의 혈압 감소를 보였습니다.
2. 위장 개선: 인삼은 흉터, 통증 등으로 고통받는 위장 질환을 일부나마 개선할 수 있는데, 이는 각종 실험들에서 확인된 것입니다.
3. 면역 강화: 인삼은 면역체계를 강화시키는데 효과가 있으며, 국내에서도 2014년부터는 식약처의 의약용품 수출증명제에 대한 최종적인 평가로 사용되고 있습니다.
위와 같은 효능을 갖춘 인삼은 많이 사용하는 건강식품의 원료로도 활용됩니다. [end of text]
```
## Evaluation
For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) .
| model | score | average(0~5) | percentage |
| --------------------------------------- |---------| ------------ | ---------- |
| gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% |
| naver Cue(close) | 140 | 3.78 | 75.67% |
| clova X(close) | 136 | 3.67 | 73.51% |
| WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% |
| Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% |
| Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% |
| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% |
| kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% |
| beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% |
| **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** |
| **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
| **komt-llama-30b-v1 (open)(ours)** | **129** | **3.16** | **63.24%** |
| **komt-mistral-7b-v1 (open)(ours)** | **131** | **3.54** | **70.81%** |
|
yesj1234/xlsr_enko_exp2
|
yesj1234
| 2023-10-24T04:46:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"./sample_speech.py",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-24T04:44:38Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- automatic-speech-recognition
- ./sample_speech.py
- generated_from_trainer
metrics:
- wer
model-index:
- name: en-xlsr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en-xlsr
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the ./SAMPLE_SPEECH.PY - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5356
- Cer: 0.0853
- Wer: 0.1884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.7055 | 2.79 | 600 | 0.4911 | 0.1304 | 0.3308 |
| 0.3761 | 5.58 | 1200 | 0.3984 | 0.1053 | 0.2533 |
| 0.278 | 8.37 | 1800 | 0.4070 | 0.1024 | 0.2445 |
| 0.2196 | 11.16 | 2400 | 0.4033 | 0.0974 | 0.2243 |
| 0.1842 | 13.95 | 3000 | 0.4270 | 0.0928 | 0.2106 |
| 0.1533 | 16.74 | 3600 | 0.4582 | 0.0916 | 0.2071 |
| 0.1257 | 19.53 | 4200 | 0.4685 | 0.0901 | 0.2001 |
| 0.1071 | 22.33 | 4800 | 0.5088 | 0.0878 | 0.1965 |
| 0.0967 | 25.12 | 5400 | 0.5224 | 0.0872 | 0.1913 |
| 0.0839 | 27.91 | 6000 | 0.5379 | 0.0860 | 0.1885 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
crumb/uranus-test
|
crumb
| 2023-10-24T04:12:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-24T04:11:35Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
minghuiZz/ppo-Huggy
|
minghuiZz
| 2023-10-24T04:08:25Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-10-24T04:08:14Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: minghuiZz/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AmineAllo/MT-lively-blaze-90
|
AmineAllo
| 2023-10-24T04:03:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"table-transformer",
"object-detection",
"generated_from_trainer",
"base_model:AmineAllo/MT-magic-puddle-87",
"base_model:finetune:AmineAllo/MT-magic-puddle-87",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-24T03:47:37Z |
---
base_model: toobiza/MT-magic-puddle-87
tags:
- generated_from_trainer
model-index:
- name: MT-lively-blaze-90
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MT-lively-blaze-90
This model is a fine-tuned version of [toobiza/MT-magic-puddle-87](https://huggingface.co/toobiza/MT-magic-puddle-87) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1930
- eval_loss_ce: 0.0000
- eval_loss_bbox: 0.0269
- eval_cardinality_error: 1.0
- eval_giou: 97.0766
- eval_runtime: 118.1891
- eval_samples_per_second: 2.268
- eval_steps_per_second: 0.567
- epoch: 0.97
- step: 400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.33.2
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
|
adsbtb/jobs
|
adsbtb
| 2023-10-24T03:57:48Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-24T03:47:05Z |
---
library_name: peft
---
# Model Use:
The following model has been trained as a marketing tool. It is trained to suggest ways Steve Jobs (the master of technology marketing) would market a product.
Inference scripts need to be written separately; a rough sketch follows the sources below.
Fine-tuned version of NousResearch/Llama-2-7b-chat-hf
Primary Sources:
https://deci.ai/blog/fine-tune-llama-2-with-lora-for-question-answering/
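As noted above, this repo ships no inference script. The sketch below is an editor's assumption (not the author's code): it attaches the PEFT adapter to the declared NousResearch/Llama-2-7b-chat-hf base, loaded in 4-bit NF4 to mirror the bitsandbytes settings listed under Training procedure.
```python
# Hypothetical inference sketch, assuming this repo contains a standard PEFT adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf", quantization_config=bnb, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "adsbtb/jobs")

# Illustrative prompt only; phrasing is not prescribed by the card.
prompt = "How would Steve Jobs market a solar-powered backpack?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=80)[0], skip_special_tokens=True))
```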
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
juliajoanna/lora-trained-xl-loopy-5
|
juliajoanna
| 2023-10-24T03:54:11Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-10-23T23:54:01Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks teddy
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - juliajoanna/lora-trained-xl-loopy-5
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks teddy using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
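Not part of the original card: a minimal sketch of applying these LoRA weights with diffusers, using the declared base model and instance prompt (assumes a CUDA GPU).
```python
# Hypothetical usage sketch (not from the card): SDXL base pipeline + LoRA weights.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("juliajoanna/lora-trained-xl-loopy-5")

image = pipe("a photo of sks teddy").images[0]
image.save("sks_teddy.png")
```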
|
tingchih/1023
|
tingchih
| 2023-10-24T03:50:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-23T14:33:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: '1023'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1023
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 28.96%
- single_doc_single_modal Recall: 50.21%
- single_doc_single_modal Precision: 26.16%
- single_doc_multi_modals Recall: 25.16%
- single_doc_multi_modals Precision: 45.62%
- multi_docs_single_modal Recall: 17.31%
- multi_docs_single_modal Precision: 40.59%
- multi_docs_multi_modals Recall: 0%
- multi_docs_multi_modals Precision: 0%
## Model description
More information needed
## Intended uses & limitations
More information needed
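A minimal classification sketch is shown below; note that the mapping from the pipeline's `LABEL_*` outputs to the four document/modality categories listed above is not documented here, so treat the labels as placeholders.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tingchih/1023")

# Returns a list like [{'label': ..., 'score': ...}]
print(classifier("Example input text summarizing one or more documents."))
```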
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kevin178/sd-class-butterflies-32
|
kevin178
| 2023-10-24T03:11:20Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-10-24T03:11:01Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('kevin178/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
zgce/Mistral-RP-0.1-7B-3.5bpw-hb6-exl2
|
zgce
| 2023-10-24T03:05:43Z | 5 | 0 |
transformers
|
[
"transformers",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-21T11:54:27Z |
---
license: apache-2.0
---
### Mistral-RP-0.1 7B EXL2-3.5bpw
- Model creator: [Undi95](https://huggingface.co/Undi95)
- Original model: [Mistral-RP-0.1-7B](https://huggingface.co/Undi95/Mistral-RP-0.1-7B)
### Description
- 3.5bpw per weight
- An experimental build for low-VRAM setups, made at Surdo's request
I converted the model using the convert.py script from the exllamav2 repo:
https://github.com/turboderp/exllamav2
Its documentation:
https://github.com/turboderp/exllamav2/blob/master/doc/convert.md
I used the WikiText-2-v1 dataset for calibration:
https://huggingface.co/datasets/wikitext/blob/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet
|
zgce/Emerhyst-20B-5bpw-hb6-exl2
|
zgce
| 2023-10-24T03:05:23Z | 11 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-20T09:15:52Z |
---
license: apache-2.0
---
### Emerhyst-20B EXL2-5bpw
Model creator: [Undi95](https://huggingface.co/Undi95)
Original model: [Emerhyst-20B](https://huggingface.co/Undi95/Emerhyst-20B)
### Description
- 5bpw per weight
- This is a test for the exllamav2 model
I converted the model using the convert.py script from the exllamav2 repo: https://github.com/turboderp/exllamav2
Its documentation: https://github.com/turboderp/exllamav2/blob/master/doc/convert.md
I used the WikiText-2-v1 dataset for calibration: https://huggingface.co/datasets/wikitext/blob/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet
|
lenmon666/lentest
|
lenmon666
| 2023-10-24T02:50:28Z | 0 | 0 | null |
[
"dataset:laion/dalle-3-dataset",
"region:us"
] | null | 2023-10-24T02:49:21Z |
---
datasets:
- laion/dalle-3-dataset
---
|
zrjin/icefall-asr-aishell-zipformer-2023-10-24
|
zrjin
| 2023-10-24T02:44:49Z | 0 | 0 | null |
[
"tensorboard",
"onnx",
"region:us"
] | null | 2023-10-24T01:56:32Z |
See https://github.com/k2-fsa/icefall/pull/1216 for more information.
|
neromule/ppo-LunarLander-v2
|
neromule
| 2023-10-24T02:25:13Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-23T23:20:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.46 +/- 13.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `package_to_hub` naming; check the repository's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="neromule/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Technonia/mistral-instruct-7b-cot-neftune
|
Technonia
| 2023-10-24T02:18:32Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-24T02:13:52Z |
---
library_name: peft
---
## Overview
Fine-tuned the mistralai/Mistral-7B-Instruct-v0.1 model on the kaist-ai/CoT-Collection dataset.
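A minimal sketch for loading this adapter on top of the base model in 4-bit, mirroring the quantization config below; the prompt and generation settings are illustrative only.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the CoT adapter from this repository
model = PeftModel.from_pretrained(base, "Technonia/mistral-instruct-7b-cot-neftune")

prompt = "[INST] A train travels 60 km in 45 minutes. What is its average speed in km/h? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```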
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
amritgupta/qafacteval
|
amritgupta
| 2023-10-24T02:16:38Z | 0 | 0 | null |
[
"arxiv:2112.08542",
"license:bsd",
"region:us"
] | null | 2023-07-28T01:49:45Z |
---
license: bsd
---
@misc{fabbri-etal-2022-qafacteval,
title = {QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization},
author = {Alexander R. Fabbri and Chien-Sheng Wu and Wenhao Liu and Caiming Xiong},
year={2022},
eprint={2112.08542},
archivePrefix={arXiv},
primaryClass={cs.CL},
url = {https://arxiv.org/abs/2112.08542},
}
BSD 3-Clause License
Copyright (c) 2022, Salesforce.com, Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of Salesforce.com nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
pragnyas/IDEFICS-9b-instruct-GQA_noSG-full
|
pragnyas
| 2023-10-24T02:11:25Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:HuggingFaceM4/idefics-9b-instruct",
"base_model:adapter:HuggingFaceM4/idefics-9b-instruct",
"region:us"
] | null | 2023-10-24T02:11:22Z |
---
library_name: peft
base_model: HuggingFaceM4/idefics-9b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: ['lm_head', 'embed_tokens']
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
paragdakle/mistral-7b-stem-lw
|
paragdakle
| 2023-10-24T02:08:12Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-24T02:02:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
mangaphd/HausaBERTa
|
mangaphd
| 2023-10-24T01:57:33Z | 10 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"ha",
"af",
"dataset:mangaphd/hausaBERTdatatrain",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-04T09:52:16Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: hausaBERTa
results: []
datasets:
- mangaphd/hausaBERTdatatrain
language:
- ha
- af
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hausaBERTa
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) trained on mangaphd/hausaBERTdatatrain dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0151
- Train Accuracy: 0.9849
- Epoch: 2
The sentiment fine-tuning was done in the Hausa language.
Model Repository : https://github.com/idimohammed/HausaBERTa
## Model description
HausaSentiLex is a pretrained lexicon model for a low-resource language. The model was trained on Hausa, a Chadic language spoken by the Hausa people in the northern half of Nigeria, Niger, Ghana, Cameroon, Benin and Togo, and the southern half of Niger, Chad and Sudan, with significant minorities in Ivory Coast; it is the most widely spoken language in West Africa and one of the most widely spoken languages in Africa as a whole.
The model has been shown to obtain competitive downstream performance on text classification in the language it was trained on.
## Intended uses & limitations
You can use this model with Transformers for sentiment analysis tasks in the Hausa language.
# Supplementary function
Add the following code for easier interpretation of the model output; the `pipe` pipeline below is created from this repository:
```python
import pandas as pd
from transformers import pipeline

pipe = pipeline("text-classification", model="mangaphd/HausaBERTa")

def sentiment_analysis(text):
    rs = pipe(text)
    df = pd.DataFrame(rs)
    senti = df['label'][0]
    score = df['score'][0]
    if senti == 'LABEL_0' and score > 0.5:
        lb = 'NEGATIVE'
    elif senti == 'LABEL_1' and score > 0.5:
        lb = 'POSITIVE'
    else:
        lb = 'NEUTRAL'
    return lb
```
Call `sentiment_analysis('Your text here')` when using the model.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-06, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.2108 | 0.9168 | 0 |
| 0.1593 | 0.9385 | 1 |
| 0.0151 | 0.9849 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ameerazam08/Mistral-7B-v0.1-Eng-Hin-500
|
ameerazam08
| 2023-10-24T01:42:22Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-24T01:40:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
thrunlab/t5-base_cola_mare_ar16_ex0_size-32_epochs-5_collected-stats
|
thrunlab
| 2023-10-24T01:39:13Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-24T01:12:05Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base_cola_mare_ar16_ex0_size-32_epochs-5_collected-stats
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Accuracy
type: accuracy
value: 0.837967401725791
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base_cola_mare_ar16_ex0_size-32_epochs-5_collected-stats
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4445
- Accuracy: 0.8380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5829 | 0.19 | 50 | 0.5713 | 0.6913 |
| 0.4547 | 0.37 | 100 | 0.5431 | 0.7766 |
| 0.4778 | 0.56 | 150 | 0.5055 | 0.8044 |
| 0.4226 | 0.75 | 200 | 0.4686 | 0.8130 |
| 0.4085 | 0.93 | 250 | 0.4968 | 0.8207 |
| 0.3464 | 1.12 | 300 | 0.4517 | 0.8255 |
| 0.328 | 1.31 | 350 | 0.4798 | 0.8226 |
| 0.3369 | 1.49 | 400 | 0.4854 | 0.8284 |
| 0.401 | 1.68 | 450 | 0.5107 | 0.8255 |
| 0.4029 | 1.87 | 500 | 0.4445 | 0.8380 |
| 0.2603 | 2.05 | 550 | 0.5151 | 0.8351 |
| 0.2754 | 2.24 | 600 | 0.4883 | 0.8370 |
| 0.2721 | 2.43 | 650 | 0.4581 | 0.8437 |
| 0.2381 | 2.61 | 700 | 0.4775 | 0.8399 |
| 0.341 | 2.8 | 750 | 0.5196 | 0.8456 |
| 0.2879 | 2.99 | 800 | 0.5360 | 0.8399 |
| 0.2328 | 3.17 | 850 | 0.5213 | 0.8418 |
| 0.2864 | 3.36 | 900 | 0.4942 | 0.8408 |
| 0.2791 | 3.54 | 950 | 0.4915 | 0.8399 |
| 0.3247 | 3.73 | 1000 | 0.4879 | 0.8389 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
akter-sust/Taxi-v3
|
akter-sust
| 2023-10-24T01:37:48Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-24T01:37:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="akter-sust/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gustavokpc/ppo-LunarLander-v2
|
gustavokpc
| 2023-10-24T01:37:26Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-22T20:39:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.58 +/- 21.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `package_to_hub` naming; check the repository's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="gustavokpc/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
lunarlist/pos_thai
|
lunarlist
| 2023-10-24T01:33:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"th",
"dataset:lunarlist/tagging_thai",
"base_model:Geotrend/bert-base-th-cased",
"base_model:finetune:Geotrend/bert-base-th-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-20T09:53:44Z |
---
license: apache-2.0
base_model: Geotrend/bert-base-th-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos_thai
results: []
language: th
widget:
- text: ภาษาไทย ง่าย นิดเดียว
example_title: test1
- text: >-
หนุ่ม เลี้ยง ควาย ใน อิสราเอล เผย รายได้ ต่อ เดือน ทำงาน 4 ปี สร้าง บ้าน ได้
1 หลัง
example_title: test2
datasets:
- lunarlist/tagging_thai
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos_thai
This model is a fine-tuned version of [Geotrend/bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0935
- Precision: 0.9525
- Recall: 0.9540
- F1: 0.9533
- Accuracy: 0.9693
## Model description
This model is trained on Thai POS-tagging datasets to help with POS tagging in the Thai language.
## Example
~~~
from transformers import AutoModelForTokenClassification, AutoTokenizer, TokenClassificationPipeline
model = AutoModelForTokenClassification.from_pretrained("lunarlist/pos_thai")
tokenizer = AutoTokenizer.from_pretrained("lunarlist/pos_thai")
pipeline = TokenClassificationPipeline(model=model, tokenizer=tokenizer, grouped_entities=True)
outputs = pipeline("ภาษาไทย ง่าย นิดเดียว")
print(outputs)
~~~
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1124 | 1.0 | 7344 | 0.1048 | 0.9505 | 0.9478 | 0.9492 | 0.9670 |
| 0.0866 | 2.0 | 14688 | 0.0935 | 0.9525 | 0.9540 | 0.9533 | 0.9693 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
codefuse-ai/CodeFuse-TestGPT-7B
|
codefuse-ai
| 2023-10-24T01:31:05Z | 13 | 6 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"transformer",
"Codefuse",
"CodeLlama",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-21T01:15:01Z |
---
tasks:
- text-generation
tags:
- transformer
- Codefuse
- CodeLlama
studios:
- codefuse-ai/TestGPT-7B-demo
---
### 模型介绍(Introduction)
TestGPT-7B,是蚂蚁研发的测试域大模型。该模型以CodeLlama-7B为基座,进行了测试领域下游任务的微调,包含多语言测试用例生成、测试用例Assert补全。
TestGPT-7B, developed by Ant Group, is a large-scale model designed for software quality domains. Built upon the foundation of CodeLlama-7B, this model has undergone fine-tuning for downstream tasks, including multi-language test case generation and test case assertion completion.
- 多语言测试用例生成(Multi-language test case generation)
测试用例生成一直以来都是学术界和工业界非常关注的领域,近年来不断有新产品或工具孵化出来,如EvoSuite、Randoop、SmartUnit等。然而传统的用例生成存在其难以解决的痛点问题,基于大模型的测试用例生成在测试用例可读性、测试场景完整度、多语言支持方面都优于传统用例生成工具。
TestGPT-7B中重点支持了多语言测试用例生成,在我们本次开源的版本中首先包含了Java、Python、Javascript的测试用例生成能力,下一版本中逐步开放Go、C++等语言。
Test case generation has always been a highly regarded field in both academia and industry. In recent years, many products or tools have emerged, such as EvoSuite, Randoop, SmartUnit, etc. However, traditional test case generation tools face challenges that are very difficult to overcome. Test case generation based on large models is superior to traditional test case generation tools in terms of test case readability, test scenario completeness, and multi-language support.
In TestGPT-7B, there is a strong focus on supporting test case generation for multiple languages. In this open-source version, we primarily include test case generation capabilities for Java, Python, and Javascript. In the next version, we will introduce support for languages like Go and C++.
- 测试用例Assert补全(Test case assertion completion)
对当前测试用例现状的分析与探查时,我们发现代码仓库中存在一定比例的存量测试用例中未包含Assert。没有Assert的测试用例虽然能够在回归过程中执行通过,却无法发现问题。因此我们拓展了测试用例Assert自动补全这一场景。通过该模型能力,结合一定的工程化配套,可以实现对全库测试用例的批量自动补全,智能提升项目质量水位。
Based on the analysis of the current state of test cases, we have found that there is a certain proportion of existing test cases that do not include Assert statements. Test cases without Assert statements may pass during regression phase but are unable to detect any bugs. Therefore, we have expanded the scenario of automatic completion of Assert statements in test cases. With the capabilities of this model and some engineering support, we can achieve automatic completion of test cases across the entire code repository, intelligently improving the quality of the project.
后续我们会持续迭代模型能力:1)不断加入更多令人激动的测试域应用场景,如领域知识问答、测试场景分析等;2)以7B为基础,逐步扩展至13B、34B模型。欢迎关注!
In the future, we will continue to iterate on the model capabilities. 1) We will continuously contribute more exciting Software Quality Related applications, such as test scenario analysis. 2) Building on the foundation of 7B, we will gradually expand to 13B and 34B models. Stay tuned for updates!
### 依赖项(Requirements)
- python>=3.8
- pytorch>=2.0.0
- CUDA 11.4
- transformers==4.33.2
### 评测表现(Testcase Evaluation)
- TestGPT-7B测试用例生成(Multi-language test case generation)
针对模型支持的三种语言:Java、Python、Javascript,Pass@1评测结果如下:
Currently, the model supports test case generation for three languages: Java, Python, and JavaScript. The evaluation results for Pass@1 are as follows:
| Model | Java pass@1 | Java Average number of test scenarios | Python pass@1 | Python Average number of test scenarios | Javascript pass@1 | Javascript Average number of test scenarios |
| --- | --- | --- | --- | --- | --- | --- |
| TestGPT-7B | 48.6% | 4.37 | 35.67% | 3.56 | 36% | 2.76 |
| CodeLlama-13B-Instruct | 40.54% | 1.08 | 30.57% | 1.65 | 31.7% | 3.13 |
| Qwen-14B-Chat | 10.81% | 2.78 | 15.9% | 1.32 | 9.15% | 4.22 |
| Baichuan2-13B-Chat | 13.5% | 2.24 | 12.7% | 2.12 | 6.1% | 3.31 |
注:由于当前已开源的Base模型(如CodeLlama-13B/Qwen-14B/Baichuan2-13B等)不具备测试用例生成能力,因此在评测结果对比时,我们都选择了官方对齐后的chat模型(如CodeLlama-13B-Instruct/Qwen-14B-Chat/Baichuan2-13B-Chat)
Note: Since the currently open-sourced Base models (such as CodeLlama-13B/Qwen-14B/Baichuan2-13B) do not have the capability to generate test cases, we have chosen the official aligned chat models (such as CodeLlama-13B-Instruct/Qwen-14B-Chat/Baichuan2-13B-Chat) for comparison in the evaluation results.
- TestGPT-7B测试用例Assert补全(Test case assertion completion)
目前模型支持Java用例的Assert补全,Pass@1评测结果如下:
Currently, the model supports assertion completion for Java test cases. The evaluation results for Pass@1 are as follows:
| Model | pass@1 | Percentage of strong validation |
| --- | --- | --- |
| TestGPT-7B | 71.1% | 100% |
与此同时,我们也开源了测试用例生成、测试用例Assert补全的评测集,以方便进行模型效果对比与复现。评测集可在eval_data文件夹下找到。
Meanwhile, we have also open-sourced the evaluation dataset for test case generation and test case assertion completion, facilitating model performance comparison and reproduction. The evaluation dataset can be found in the eval_data folder.
### 快速使用(QuickStart)
下面我们展示使用TestGPT-7B模型,进行测试用例生成、测试用例Assert补全的示例:
Below are examples of test case generation and test case assertion completion using the TestGPT-7B model:
```python
from modelscope import AutoModelForCausalLM, AutoTokenizer, snapshot_download, AutoConfig
import torch
HUMAN_ROLE_START_TAG = "<s>human\n"
BOT_ROLE_START_TAG = "<s>bot\n"
if __name__ == '__main__':
# Model path; this can be replaced with a local path
model_dir = snapshot_download('codefuse-ai/TestGPT-7B', revision = 'v1.0.0')
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True, use_fast=False, legacy=False)
eos_token = '</s>'
pad_token = '<unk>'
try:
tokenizer.eos_token = eos_token
tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids(eos_token)
except:
print(tokenizer.eos_token, tokenizer.eos_token_id)
try:
tokenizer.pad_token = pad_token
tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids(pad_token)
except:
print(tokenizer.pad_token, tokenizer.pad_token_id)
tokenizer.padding_side = "left"
print(f"tokenizer's eos_token: {tokenizer.eos_token}, pad_token: {tokenizer.pad_token}")
print(f"tokenizer's eos_token_id: {tokenizer.eos_token_id}, pad_token_id: {tokenizer.pad_token_id}")
# Load the config
config, unused_kwargs = AutoConfig.from_pretrained(
model_dir,
use_flash_attn=True,
use_xformers=True,
trust_remote_code=True,
return_unused_kwargs=True)
# Load the model
model = AutoModelForCausalLM.from_pretrained(
model_dir,
config=config,
device_map="auto",
torch_dtype=torch.bfloat16,
trust_remote_code=True,
use_safetensors=False,
).eval()
# Run inference to generate test cases
# The prompt contains the code under test; two formats are supported: test case generation and assert completion
# Test case generation format
prompt = '为以下Python代码生成单元测试\n' \
'```Python\ndef add(lst):\n return sum([lst[i] for i in range(1, len(lst), 2) if lst[i]%2 == 0])\n```\n'
# Assert completion format (currently Java only)
# prompt = '下面是被测代码\n' \
# '```java\n' \
# 'public class BooleanUtils {\n ' \
# 'public static boolean and(final boolean... array) {\n ' \
# 'ObjectUtils.requireNonEmpty(array, "array");\n ' \
# 'for (final boolean element : array) {\n ' \
# 'if (!element) {\n return false;\n }\n }\n ' \
# 'return true;\n }\n}\n```\n' \
# '下面代码是针对上面被测代码生成的用例,请补全用例,生成assert校验\n' \
# '```java\n' \
# '@Test\npublic void testAnd_withAllTrueInputs() {\n ' \
# 'boolean[] input = new boolean[] {true, true, true};\n ' \
# 'boolean result = BooleanUtils.and(input);\n}\n\n@Test\npublic void testAnd_withOneFalseInput() {\n ' \
# 'boolean[] input = new boolean[] {true, false, true};\n ' \
# 'boolean result = BooleanUtils.and(input);\n}\n' \
# '```\n'
# Format the input
prompt = f"{HUMAN_ROLE_START_TAG}{prompt}{BOT_ROLE_START_TAG}"
inputs = tokenizer(prompt, return_tensors='pt', padding=True, add_special_tokens=False).to("cuda")
# Run inference
outputs = model.generate(
inputs=inputs["input_ids"],
max_new_tokens=2048,
top_p=0.95,
temperature=0.2,
do_sample=True,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
num_return_sequences=1,
)
# Process the outputs
outputs_len = len(outputs)
print(f"output len is: {outputs_len}")
for index in range(0, outputs_len):
print(f"generate index: {index}")
gen_text = tokenizer.decode(outputs[index], skip_special_tokens=True)
print(gen_text)
print("===================")
```
|
xz97/AlpaCare-llama-13b
|
xz97
| 2023-10-24T01:28:02Z | 6 | 6 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2310.14558",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-23T20:28:09Z |
---
license: cc-by-nc-4.0
language:
- en
---
# AlpaCare: Instruction-tuned Large Language Models for Medical Applications
<p align="center">
<img src="https://raw.githubusercontent.com/XZhang97666/AlpaCare/master/plots/logo.png" alt="Alt Text" width="200" height="200">
</p>
This is the model weight of *AlpaCare*-LLaMA-13B. *AlpaCare* are LLMs tuned on medical instructions.
Github page:
[https://github.com/XZhang97666/AlpaCare/](https://github.com/XZhang97666/AlpaCare/)
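## Usage:
A minimal generation sketch; the instruction and decoding settings below are illustrative only, so see the GitHub repository above for the exact instruction template used during tuning.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xz97/AlpaCare-llama-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

instruction = "List common side effects of ibuprofen."
inputs = tokenizer(instruction, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```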
## Citation:
If you find this repository useful, please cite the paper:
```
@misc{zhang2023alpacareinstructiontuned,
title={AlpaCare:Instruction-tuned Large Language Models for Medical Application},
author={Xinlu Zhang and Chenxin Tian and Xianjun Yang and Lichang Chen and Zekun Li and Linda Ruth Petzold},
year={2023},
eprint={2310.14558},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
xz97/AlpaCare-llama2-7b
|
xz97
| 2023-10-24T01:27:13Z | 255 | 6 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2310.14558",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-23T19:36:20Z |
---
license: cc-by-nc-4.0
language:
- en
---
# AlpaCare: Instruction-tuned Large Language Models for Medical Applications
<p align="center">
<img src="https://raw.githubusercontent.com/XZhang97666/AlpaCare/master/plots/logo.png" alt="Alt Text" width="200" height="200">
</p>
This is the model weight of *AlpaCare*-LLaMA2-7B. *AlpaCare* are LLMs tuned on medical instructions.
Github page:
[https://github.com/XZhang97666/AlpaCare/](https://github.com/XZhang97666/AlpaCare/)
## Citation:
If you find this repository useful, please cite the paper:
```
@misc{zhang2023alpacareinstructiontuned,
title={AlpaCare:Instruction-tuned Large Language Models for Medical Application},
author={Xinlu Zhang and Chenxin Tian and Xianjun Yang and Lichang Chen and Zekun Li and Linda Ruth Petzold},
year={2023},
eprint={2310.14558},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
kargaranamir/T5R-base
|
kargaranamir
| 2023-10-24T01:27:07Z | 20 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"en",
"dataset:tatsu-lab/alpaca",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-14T17:25:28Z |
---
license: mit
datasets:
- tatsu-lab/alpaca
tags:
- generated_from_trainer
- text2text-generation
model-index:
- name: T5R-base
results: []
pipeline_tag: text2text-generation
language:
- en
widget:
- text: |
Instruction: X
Output: Adolf Hitler (German: [ˈadɔlf ˈhɪtlɐ] (listen); 20 April 1889 – 30 April 1945) was an Austrian-born German politician who was the dictator of Germany from 1933 until his suicide in 1945. He rose to power as the leader of the Nazi Party,[a] becoming the chancellor in 1933 and then taking the title of Führer und Reichskanzler in 1934.[b] During his dictatorship, he initiated World War II in Europe by invading Poland on 1 September 1939. He was closely involved in military operations throughout the war and was central to the perpetration of the Holocaust: the genocide of about six million Jews and millions of other victims.
X:
example_title: Example 1
- text: |
Instruction: X
Output: 1- Base your meals on higher fibre starchy carbohydrates. 2- Eat lots of fruit and veg. 3- Eat more fish, including a portion of oily fish.
What kind of instruction could this be the answer to?
X:
example_title: Example 2
---
# T5-Reverse (T5R)
This model can generate prompts (instructions) for any text!
This model is an instruction-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on [alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca) but in **reverse format**!
## How to Use the Model
You can use the `transformers` library to load and utilize the T5-Reverse (T5R) model for generating prompts based on text. Here's an example of how to do it:
```python
>>> # Import required libraries
>>> import torch
>>> from transformers import pipeline
>>> # Load the model and tokenizer using the pipeline from Hugging Face Hub
>>> inference = pipeline("text2text-generation", model="kargaranamir/T5R-base")
>>> # Example instruction and prompt
>>> sample = '''
>>> Instruction: X
>>> Output: 1- Base your meals on higher fibre starchy carbohydrates. 2- Eat lots of fruit and veg. 3- Eat more fish, including a portion of oily fish.
>>> What kind of instruction could this be the answer to?
>>> X:
>>> '''
>>> # Generate a response using the model
>>> res = inference(sample)
>>> # Print the generated response
>>> print(res)
[{'generated_text': 'Instruction: Generate three recommendations for a healthy diet.'}]
```
## Citation
If you find this model/approach useful, please link back to this Hugging Face model.
|
EliasNsilva/PyramidsAgent
|
EliasNsilva
| 2023-10-24T01:17:54Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-10-24T01:17:52Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: EliasNsilva/PyramidsAgent
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AmineAllo/MT-magic-puddle-87
|
AmineAllo
| 2023-10-24T01:04:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"table-transformer",
"object-detection",
"generated_from_trainer",
"base_model:AmineAllo/table-transformer-stellar-vortex-81",
"base_model:finetune:AmineAllo/table-transformer-stellar-vortex-81",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-23T23:16:51Z |
---
base_model: toobiza/table-transformer-stellar-vortex-81
tags:
- generated_from_trainer
model-index:
- name: MT-magic-puddle-87
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MT-magic-puddle-87
This model is a fine-tuned version of [toobiza/table-transformer-stellar-vortex-81](https://huggingface.co/toobiza/table-transformer-stellar-vortex-81) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1979
- Loss Ce: 0.0000
- Loss Bbox: 0.0275
- Cardinality Error: 1.0
- Giou: 96.9810
## Model description
More information needed
## Intended uses & limitations
More information needed
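A minimal detection sketch is shown below. It assumes this repository ships an image processor config (otherwise load the processor from microsoft/table-transformer-detection); what the predicted boxes correspond to depends on the training data, which is not documented here.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

repo_id = "AmineAllo/MT-magic-puddle-87"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = TableTransformerForObjectDetection.from_pretrained(repo_id)

image = Image.open("page.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to labelled boxes in image coordinates
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```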
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Loss Ce | Loss Bbox | Cardinality Error | Giou |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:-----------------:|:-------:|
| 0.4142 | 0.12 | 50 | 0.2543 | 0.0000 | 0.0350 | 1.0 | 96.0232 |
| 0.2876 | 0.24 | 100 | 0.2384 | 0.0000 | 0.0329 | 1.0 | 96.2958 |
| 0.2522 | 0.36 | 150 | 0.2250 | 0.0000 | 0.0311 | 1.0 | 96.5247 |
| 0.2324 | 0.48 | 200 | 0.2360 | 0.0000 | 0.0327 | 1.0 | 96.3714 |
| 0.1855 | 0.6 | 250 | 0.2155 | 0.0000 | 0.0296 | 1.0 | 96.6185 |
| 0.2339 | 0.72 | 300 | 0.2184 | 0.0000 | 0.0302 | 1.0 | 96.6205 |
| 0.2416 | 0.85 | 350 | 0.2034 | 0.0000 | 0.0279 | 1.0 | 96.8059 |
| 0.213 | 0.97 | 400 | 0.2072 | 0.0000 | 0.0286 | 1.0 | 96.7959 |
| 0.2173 | 1.09 | 450 | 0.2089 | 0.0000 | 0.0287 | 1.0 | 96.7273 |
| 0.2038 | 1.21 | 500 | 0.2239 | 0.0000 | 0.0311 | 1.0 | 96.5835 |
| 0.1789 | 1.33 | 550 | 0.2235 | 0.0000 | 0.0312 | 1.0 | 96.6211 |
| 0.159 | 1.45 | 600 | 0.2111 | 0.0000 | 0.0293 | 1.0 | 96.7661 |
| 0.1629 | 1.57 | 650 | 0.2071 | 0.0000 | 0.0286 | 1.0 | 96.7952 |
| 0.2541 | 1.69 | 700 | 0.2016 | 0.0000 | 0.0280 | 1.0 | 96.9076 |
| 0.1917 | 1.81 | 750 | 0.2017 | 0.0000 | 0.0279 | 1.0 | 96.8974 |
| 0.1642 | 1.93 | 800 | 0.2058 | 0.0000 | 0.0283 | 1.0 | 96.7995 |
| 0.1598 | 2.05 | 850 | 0.1913 | 0.0000 | 0.0263 | 1.0 | 96.9992 |
| 0.1551 | 2.17 | 900 | 0.1958 | 0.0000 | 0.0269 | 1.0 | 96.9452 |
| 0.1698 | 2.29 | 950 | 0.2028 | 0.0000 | 0.0281 | 1.0 | 96.8889 |
| 0.1414 | 2.42 | 1000 | 0.2063 | 0.0000 | 0.0287 | 1.0 | 96.8551 |
| 0.1605 | 2.54 | 1050 | 0.1988 | 0.0000 | 0.0276 | 1.0 | 96.9509 |
| 0.1929 | 2.66 | 1100 | 0.2085 | 0.0000 | 0.0290 | 1.0 | 96.8305 |
| 0.1563 | 2.78 | 1150 | 0.2031 | 0.0000 | 0.0282 | 1.0 | 96.9069 |
| 0.1263 | 2.9 | 1200 | 0.2004 | 0.0000 | 0.0278 | 1.0 | 96.9343 |
| 0.1588 | 3.02 | 1250 | 0.2031 | 0.0000 | 0.0282 | 1.0 | 96.8919 |
| 0.2023 | 3.14 | 1300 | 0.1986 | 0.0000 | 0.0276 | 1.0 | 96.9607 |
| 0.1434 | 3.26 | 1350 | 0.1981 | 0.0000 | 0.0275 | 1.0 | 96.9762 |
| 0.1436 | 3.38 | 1400 | 0.1995 | 0.0000 | 0.0278 | 1.0 | 96.9637 |
| 0.1208 | 3.5 | 1450 | 0.1996 | 0.0000 | 0.0278 | 1.0 | 96.9668 |
| 0.1371 | 3.62 | 1500 | 0.1993 | 0.0000 | 0.0277 | 1.0 | 96.9602 |
| 0.1585 | 3.74 | 1550 | 0.1983 | 0.0000 | 0.0275 | 1.0 | 96.9670 |
| 0.1356 | 3.86 | 1600 | 0.1980 | 0.0000 | 0.0275 | 1.0 | 96.9826 |
| 0.1306 | 3.99 | 1650 | 0.1979 | 0.0000 | 0.0275 | 1.0 | 96.9810 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
|
poseidon-rust2/Llama-2-7B-GPTQ-QLora
|
poseidon-rust2
| 2023-10-24T00:50:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-24T00:47:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.01
- desc_act: False
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- disable_exllama: False
### Framework versions
- PEFT 0.5.0
|
hung200504/bert-covid
|
hung200504
| 2023-10-24T00:42:02Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-24T00:41:47Z |
---
license: mit
base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: bert-covid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-covid
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6900
## Model description
More information needed
## Intended uses & limitations
More information needed
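A minimal extractive question-answering sketch; the question/context pair is illustrative only.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="hung200504/bert-covid")

context = (
    "COVID-19 is caused by the coronavirus SARS-CoV-2, which was first "
    "identified in December 2019."
)
result = qa(question="Which virus causes COVID-19?", context=context)
print(result["answer"], result["score"])
```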
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.474 | 0.04 | 5 | 4.3730 |
| 3.9933 | 0.09 | 10 | 3.2783 |
| 3.0206 | 0.13 | 15 | 2.0289 |
| 1.9741 | 0.18 | 20 | 1.3879 |
| 1.4351 | 0.22 | 25 | 1.1733 |
| 1.5916 | 0.26 | 30 | 1.1623 |
| 0.5383 | 0.31 | 35 | 1.1952 |
| 0.7776 | 0.35 | 40 | 1.1920 |
| 1.1785 | 0.39 | 45 | 1.1216 |
| 1.1334 | 0.44 | 50 | 1.0412 |
| 0.7445 | 0.48 | 55 | 1.0829 |
| 0.6512 | 0.53 | 60 | 1.0443 |
| 0.7516 | 0.57 | 65 | 1.0089 |
| 0.5953 | 0.61 | 70 | 0.9273 |
| 0.8589 | 0.66 | 75 | 0.8947 |
| 0.7561 | 0.7 | 80 | 0.9009 |
| 0.9561 | 0.75 | 85 | 0.9006 |
| 0.7731 | 0.79 | 90 | 0.8482 |
| 0.8269 | 0.83 | 95 | 0.8380 |
| 0.9884 | 0.88 | 100 | 0.8200 |
| 0.9187 | 0.92 | 105 | 0.8775 |
| 0.585 | 0.96 | 110 | 0.8499 |
| 0.6835 | 1.01 | 115 | 0.8314 |
| 0.6668 | 1.05 | 120 | 0.7491 |
| 0.5558 | 1.1 | 125 | 0.7154 |
| 0.4491 | 1.14 | 130 | 0.8212 |
| 1.0667 | 1.18 | 135 | 0.8477 |
| 0.4472 | 1.23 | 140 | 0.7636 |
| 0.6892 | 1.27 | 145 | 0.7493 |
| 0.66 | 1.32 | 150 | 0.6932 |
| 0.5044 | 1.36 | 155 | 0.7675 |
| 0.5329 | 1.4 | 160 | 0.7406 |
| 0.2223 | 1.45 | 165 | 0.8099 |
| 0.5495 | 1.49 | 170 | 0.8758 |
| 0.5534 | 1.54 | 175 | 0.8476 |
| 0.4962 | 1.58 | 180 | 0.7953 |
| 0.7477 | 1.62 | 185 | 0.7610 |
| 0.7293 | 1.67 | 190 | 0.8357 |
| 0.6205 | 1.71 | 195 | 0.7339 |
| 0.5687 | 1.75 | 200 | 0.6908 |
| 0.884 | 1.8 | 205 | 0.6706 |
| 0.5928 | 1.84 | 210 | 0.6546 |
| 0.3209 | 1.89 | 215 | 0.6505 |
| 0.7585 | 1.93 | 220 | 0.6486 |
| 0.8501 | 1.97 | 225 | 0.6272 |
| 0.1664 | 2.02 | 230 | 0.6211 |
| 0.4483 | 2.06 | 235 | 0.6550 |
| 0.3361 | 2.11 | 240 | 0.6604 |
| 0.3085 | 2.15 | 245 | 0.6520 |
| 0.2407 | 2.19 | 250 | 0.6695 |
| 0.3418 | 2.24 | 255 | 0.6687 |
| 0.3165 | 2.28 | 260 | 0.6730 |
| 0.5811 | 2.32 | 265 | 0.6546 |
| 0.3516 | 2.37 | 270 | 0.6579 |
| 0.3136 | 2.41 | 275 | 0.6688 |
| 0.2508 | 2.46 | 280 | 0.6921 |
| 0.3463 | 2.5 | 285 | 0.7124 |
| 0.3603 | 2.54 | 290 | 0.7160 |
| 0.4455 | 2.59 | 295 | 0.6995 |
| 0.5433 | 2.63 | 300 | 0.6919 |
| 0.3411 | 2.68 | 305 | 0.6898 |
| 0.6065 | 2.72 | 310 | 0.6922 |
| 0.6258 | 2.76 | 315 | 0.6955 |
| 0.283 | 2.81 | 320 | 0.7008 |
| 0.6233 | 2.85 | 325 | 0.6988 |
| 0.3899 | 2.89 | 330 | 0.6949 |
| 0.238 | 2.94 | 335 | 0.6916 |
| 0.3166 | 2.98 | 340 | 0.6900 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ichiv/dqn-SpaceInvadersNoFrameskip-v4
|
ichiv
| 2023-10-24T00:34:14Z | 14 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-24T00:23:20Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 112.00 +/- 55.37
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ichiv -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ichiv -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ichiv
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 256),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.2),
('frame_stack', 8),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 1000),
('n_timesteps', 100.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 2),
('normalize', False)])
```
|
gangkongkong/llama-2-ko-7b-gangkk-alpaca-all-epoch3-nomerge
|
gangkongkong
| 2023-10-24T00:18:30Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:beomi/llama-2-ko-7b",
"base_model:adapter:beomi/llama-2-ko-7b",
"region:us"
] | null | 2023-10-24T00:18:15Z |
---
library_name: peft
base_model: beomi/llama-2-ko-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
EliasNsilva/ppo-SnowballTarget
|
EliasNsilva
| 2023-10-24T00:18:29Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-10-24T00:18:25Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
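To try the policy locally before resuming training, you can pull this checkpoint from the Hub. A minimal sketch using the ML-Agents Hub helper (the local directory is illustrative; assumes your ml-agents install provides the Hub integration):
```bash
# Download the trained policy and its config from the Hub (sketch)
mlagents-load-from-hf --repo-id="EliasNsilva/ppo-SnowballTarget" --local-dir="./downloads"
```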
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: EliasNsilva/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
dmmagdal/Inkpunk-Diffusion-ONNX
|
dmmagdal
| 2023-10-24T00:16:45Z | 4 | 0 |
diffusers
|
[
"diffusers",
"onnx",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-23T23:59:08Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
---
# Model Card for Model ID
This is a copy of Envvi's [Inkpunk Diffusion](https://huggingface.co/Envvi/Inkpunk-Diffusion) model (v1) ported to ONNX. Please see their model page for more information.
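A minimal loading sketch with diffusers' ONNX pipeline (CPU execution provider assumed; the prompt is illustrative and uses the original model's `nvinkpunk` trigger token):
```python
from diffusers import OnnxStableDiffusionPipeline

# Load the ONNX weights from this repository and run on CPU
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "dmmagdal/Inkpunk-Diffusion-ONNX", provider="CPUExecutionProvider"
)
image = pipe("nvinkpunk portrait of a samurai, ink splashes").images[0]
image.save("inkpunk_sample.png")
```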
|
luisvidal-lv/comparation_2_text
|
luisvidal-lv
| 2023-10-24T00:11:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-23T23:36:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: comparation_2_text
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8504901960784313
- name: F1
type: f1
value: 0.8920353982300885
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# comparation_2_text
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5581
- Accuracy: 0.8505
- F1: 0.8920
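A minimal inference sketch for paraphrase detection on a sentence pair (not part of the original card; the example sentences are illustrative):
```python
from transformers import pipeline

# MRPC is a sentence-pair task, so pass text / text_pair together
classifier = pipeline("text-classification", model="luisvidal-lv/comparation_2_text")
print(classifier({"text": "The company posted record profits this quarter.",
                  "text_pair": "Quarterly profits at the company hit an all-time high."}))
```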
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5296 | 1.09 | 500 | 0.4796 | 0.8162 | 0.8654 |
| 0.3365 | 2.18 | 1000 | 0.5581 | 0.8505 | 0.8920 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
|
jaweed123/Taxi-v3
|
jaweed123
| 2023-10-23T23:37:42Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-23T23:37:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="jaweed123/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
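`load_from_hub` is the helper defined in the Deep RL course notebook (the snippet above also assumes `import gym` or `import gymnasium as gym`); a minimal sketch of such a helper, assuming the model dict is stored as a pickle on the Hub, could look like:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the model dict (Q-table, env_id, hyperparameters)."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```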
|
jaweed123/q-FrozenLake-v1-4x4-noSlippery
|
jaweed123
| 2023-10-23T23:34:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-23T23:34:45Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="jaweed123/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Panchovix/llama-2-70b-Guanaco-QLoRA-fp16-safetensors
|
Panchovix
| 2023-10-23T23:28:02Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-09T22:31:28Z |
---
license: llama2
---
Safetensors conversion of llama-2-70b-Guanaco-QLoRA-fp16 (https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-fp16/tree/main), to be used directly with transformers or to create quants with exllamav2.
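A minimal loading sketch with transformers (not an official snippet; fp16 weights for a 70B model need roughly 140 GB of combined GPU/CPU memory, and `device_map="auto"` requires `accelerate`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Panchovix/llama-2-70b-Guanaco-QLoRA-fp16-safetensors"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)
```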
|
rianders/finetuning-sentiment-model-3000-samples
|
rianders
| 2023-10-23T23:24:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-23T22:28:34Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3476
- Accuracy: 0.8733
- F1: 0.8742
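A minimal inference sketch (not part of the original card; the review text is illustrative and the label names come from `model.config.id2label`):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "rianders/finetuning-sentiment-model-3000-samples"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print({model.config.id2label[i]: p.item() for i, p in enumerate(probs[0])})
```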
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Tokenizers 0.14.1
|
GuysTrans/bart-base-re-attention
|
GuysTrans
| 2023-10-23T23:23:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:GuysTrans/bart-base-re-attention",
"base_model:finetune:GuysTrans/bart-base-re-attention",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-08T04:33:16Z |
---
license: apache-2.0
base_model: GuysTrans/bart-base-re-attention
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-re-attention
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-re-attention
This model is a fine-tuned version of [GuysTrans/bart-base-re-attention](https://huggingface.co/GuysTrans/bart-base-re-attention) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 9.7017
- Rouge1: 9.5572
- Rouge2: 0.2974
- Rougel: 6.8156
- Rougelsum: 8.7252
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.1174 | 1.0 | 22880 | 9.7017 | 9.5572 | 0.2974 | 6.8156 | 8.7252 | 20.0 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
royallab/LLaMA2-13B-TiefighterLR-exl2
|
royallab
| 2023-10-23T23:10:59Z | 0 | 1 | null |
[
"en",
"license:llama2",
"region:us"
] | null | 2023-10-23T21:55:47Z |
---
license: llama2
language:
- en
---
## Information
This is an Exl2 quantized version of [LLaMA2-13B-TiefighterLR](https://huggingface.co/KoboldAI/LLaMA2-13B-TiefighterLR).
Please refer to the original creator for more information.
Calibration dataset: [wikitext](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-v1/test)
## Branches:
- main: Measurement files
- 4bpw: 4 bits per weight
- 5bpw: 5 bits per weight
- 6bpw: 6 bits per weight
- 8bpw: 8 bits per weight
## Notes
- 6bpw is recommended for the best quality to vram usage ratio (assuming you have enough vram).
- Please ask for more bpws in the community tab if necessary.
## Donate?
All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri
You should not feel obligated to donate, but if you do, I'd appreciate it.
|
carlosdanielhernandezmena/stt_es_quartznet15x5_ft_ep53_944h
|
carlosdanielhernandezmena
| 2023-10-23T23:05:19Z | 8 | 1 |
nemo
|
[
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"pytorch",
"NeMo",
"QuartzNet",
"QuartzNet15x5",
"spanish",
"es",
"dataset:ciempiess/ciempiess_light",
"dataset:ciempiess/ciempiess_balance",
"dataset:ciempiess/ciempiess_fem",
"dataset:common_voice",
"dataset:hub4ne_es_LDC98S74",
"dataset:callhome_es_LDC96S35",
"license:cc-by-4.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-11-23T20:27:07Z |
---
language:
- es
library_name: nemo
datasets:
- ciempiess/ciempiess_light
- ciempiess/ciempiess_balance
- ciempiess/ciempiess_fem
- common_voice
- hub4ne_es_LDC98S74
- callhome_es_LDC96S35
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- pytorch
- NeMo
- QuartzNet
- QuartzNet15x5
- spanish
license: cc-by-4.0
model-index:
- name: stt_es_quartznet15x5_ft_ep53_944h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 10.0 (Test)
type: mozilla-foundation/common_voice_10_0
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 17.99
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 10.0 (Dev)
type: mozilla-foundation/common_voice_10_0
split: validation
args:
language: es
metrics:
- name: WER
type: wer
value: 15.97
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CIEMPIESS-TEST
type: ciempiess/ciempiess_test
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 19.48
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: 1997 Spanish Broadcast News Speech (HUB4-NE)
type: HUB4NE_LDC98S74
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 14.48
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CALLHOME Spanish Speech (Test)
type: callhome_LDC96S35
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 55.43
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CALLHOME Spanish Speech (Dev)
type: callhome_LDC96S35
split: validation
args:
language: es
metrics:
- name: WER
type: wer
value: 56.34
---
# stt_es_quartznet15x5_ft_ep53_944h
**Paper:** [The state of end-to-end systems for Mexican Spanish speech recognition](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/viewFile/6485/3892)
**NOTE! This model was trained with the NeMo version: nemo-toolkit==1.10.0**
The "stt_es_quartznet15x5_ft_ep53_944h" is an acoustic model created with NeMo which is suitable for Automatic Speech Recognition in Spanish.
It is the result of fine-tuning the model ["stt_es_quartznet15x5.nemo"](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_es_quartznet15x5) with around 944 hours of Spanish data gathered or developed by the [CIEMPIESS-UNAM Project](https://huggingface.co/ciempiess) since 2012. Most of the data is available at the CIEMPIESS-UNAM Project homepage http://www.ciempiess.org/. The rest can be found in public repositories such as [LDC](https://www.ldc.upenn.edu/) or [OpenSLR](https://openslr.org/).
The specific list of corpora used to fine-tune the model is:
- [CIEMPIESS-LIGHT (18h25m)](https://catalog.ldc.upenn.edu/LDC2017S23)
- [CIEMPIESS-BALANCE (18h20m)](https://catalog.ldc.upenn.edu/LDC2018S11)
- [CIEMPIESS-FEM (13h54m)](https://catalog.ldc.upenn.edu/LDC2019S07)
- [CHM150 (1h38m)](https://catalog.ldc.upenn.edu/LDC2016S04)
- [TEDX_SPANISH (24h29m)](https://openslr.org/67/)
- [LIBRIVOX_SPANISH (73h01m)](https://catalog.ldc.upenn.edu/LDC2020S01)
- [WIKIPEDIA_SPANISH (25h37m)](https://catalog.ldc.upenn.edu/LDC2021S07)
- [VOXFORGE_SPANISH (49h42m)](http://www.voxforge.org/es)
- [MOZILLA COMMON VOICE 10.0 (320h22m)](https://commonvoice.mozilla.org/es)
- [HEROICO (16h33m)](https://catalog.ldc.upenn.edu/LDC2006S37)
- [LATINO-40 (6h48m)](https://catalog.ldc.upenn.edu/LDC95S28)
- [CALLHOME_SPANISH (13h22m)](https://catalog.ldc.upenn.edu/LDC96S35)
- [HUB4NE_SPANISH (31h41m)](https://catalog.ldc.upenn.edu/LDC98S74)
- [FISHER_SPANISH (127h22m)](https://catalog.ldc.upenn.edu/LDC2010S01)
- [Chilean Spanish speech data set (7h08m)](https://openslr.org/71/)
- [Colombian Spanish speech data set (7h34m)](https://openslr.org/72/)
- [Peruvian Spanish speech data set (9h13m)](https://openslr.org/73/)
- [Argentinian Spanish speech data set (8h01m)](https://openslr.org/61/)
- [Puerto Rico Spanish speech data set (1h00m)](https://openslr.org/74/)
- [MediaSpeech Spanish (10h00m)](https://openslr.org/108/)
- [DIMEX100-LIGHT (6h09m)](https://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html)
- [DIMEX100-NIÑOS (08h09m)](https://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html)
- [GOLEM-UNIVERSUM (00h10m)](https://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html)
- [GLISSANDO (6h40m)](https://glissando.labfon.uned.es/es)
- TELE_con_CIENCIA (28h16m) **Unpublished Material**
- UNSHAREABLE MATERIAL (118h22m) **Not available for sharing**
The fine-tuning process was performed during October (2022) on the servers of the [Language and Voice Laboratory](https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
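A minimal transcription sketch (assumptions: `nemo-toolkit` is installed, the `.nemo` filename matches the repository name, and the audio file is a 16 kHz WAV):
```python
import nemo.collections.asr as nemo_asr
from huggingface_hub import hf_hub_download

# The exact .nemo filename is an assumption; check the repository file list
nemo_path = hf_hub_download(
    repo_id="carlosdanielhernandezmena/stt_es_quartznet15x5_ft_ep53_944h",
    filename="stt_es_quartznet15x5_ft_ep53_944h.nemo",
)
asr_model = nemo_asr.models.EncDecCTCModel.restore_from(nemo_path)
print(asr_model.transcribe(["ejemplo_16khz.wav"]))
```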
```bibtex
@misc{mena2022quartznet15x5spanish,
title={Acoustic Model in Spanish: stt\_es\_quartznet15x5\_ft\_ep53\_944h.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/carlosdanielhernandezmena/stt_es_quartznet15x5_ft_ep53_944h},
year={2022}
}
```
# Acknowledgements
The author wants to thank the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) at the [Facultad de Ingeniería (FI)](https://www.ingenieria.unam.mx/) of the [Universidad Nacional Autónoma de México (UNAM)](https://www.unam.mx/). He also thanks the social service students for all their hard work.
Special thanks to Jón Guðnason, head of the Language and Voice Lab, for providing computational power to make this model possible. The author also thanks the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
|
thekenken/mt5small-finetuned-summary-en-fr
|
thekenken
| 2023-10-23T23:01:46Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:thekenken/mt5-small-finetuned-summary-en-fr",
"base_model:finetune:thekenken/mt5-small-finetuned-summary-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-10-23T10:24:01Z |
---
license: apache-2.0
base_model: thekenken/mt5-small-finetuned-summary-en-fr
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-summary-en-fr-finetuned-summary-en-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-summary-en-fr-finetuned-summary-en-fr
This model is a fine-tuned version of [thekenken/mt5-small-finetuned-summary-en-fr](https://huggingface.co/thekenken/mt5-small-finetuned-summary-en-fr) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2663
- Rouge1: 0.226
- Rouge2: 0.0761
- Rougel: 0.1868
- Rougelsum: 0.1867
- Gen Len: 18.9787
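A minimal summarization sketch (not part of the original card; the input text is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="thekenken/mt5small-finetuned-summary-en-fr")
text = (
    "Le gouvernement a annoncé mardi un nouveau plan d'investissement dans les énergies "
    "renouvelables, avec un objectif de doubler la capacité solaire du pays d'ici 2030."
)
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```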
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.4654 | 1.0 | 74615 | 2.3755 | 0.2166 | 0.0691 | 0.1789 | 0.1789 | 18.9725 |
| 2.3603 | 2.0 | 149230 | 2.3189 | 0.2222 | 0.0728 | 0.1835 | 0.1836 | 18.9777 |
| 2.3052 | 3.0 | 223845 | 2.2922 | 0.2246 | 0.0749 | 0.1854 | 0.1854 | 18.9823 |
| 2.272 | 4.0 | 298460 | 2.2733 | 0.2252 | 0.0757 | 0.1861 | 0.1861 | 18.9801 |
| 2.2532 | 5.0 | 373075 | 2.2663 | 0.226 | 0.0761 | 0.1868 | 0.1867 | 18.9787 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
royallab/LLaMA2-13B-Tiefighter-exl2
|
royallab
| 2023-10-23T22:50:58Z | 42 | 10 | null |
[
"en",
"license:llama2",
"region:us"
] | null | 2023-10-23T16:05:53Z |
---
license: llama2
language:
- en
---
## Information
This is an Exl2 quantized version of [LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter).
Please refer to the original creator for more information.
Calibration dataset: [wikitext](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-v1/test)
## Branches:
- main: Measurement files
- 4bpw: 4 bits per weight
- 5bpw: 5 bits per weight
- 6bpw: 6 bits per weight
- 8bpw: 8 bits per weight
## Notes
- 6bpw is recommended for the best quality to vram usage ratio (assuming you have enough vram).
- Please ask for more bpws in the community tab if necessary.
## Donate?
All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri
You should not feel obligated to donate, but if you do, I'd appreciate it.
|
konstantindobler/xlm-roberta-base-focus-extend-isixhosa
|
konstantindobler
| 2023-10-23T22:48:14Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"xh",
"dataset:cc100",
"arxiv:2305.14481",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-23T22:38:46Z |
---
language: xh
license: mit
datasets: cc100
---
# xlm-roberta-base-focus-extend-isixhosa
XLM-R adapted to isiXhosa using "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models".
Code: https://github.com/konstantinjdobler/focus
Paper: https://arxiv.org/abs/2305.14481
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-isixhosa")
model = AutoModelForMaskedLM.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-isixhosa")
# Use model and tokenizer as usual
```
## Details
The model is based on [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) and was adapted to isiXhosa.
The original multilingual tokenizer was extended with the top 30k tokens of a language-specific isiXhosa tokenizer. The new embeddings were initialized with FOCUS.
The model was then trained on data from CC100 for 390k optimizer steps. More details and hyperparameters can be found [in the paper](https://arxiv.org/abs/2305.14481).
## Disclaimer
The web-scale dataset used for pretraining and tokenizer training (CC100) might contain personal and sensitive information.
This should be assessed carefully before any real-world deployment of the models.
## Citation
Please cite FOCUS as follows:
```bibtex
@misc{dobler-demelo-2023-focus,
title={FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models},
author={Konstantin Dobler and Gerard de Melo},
year={2023},
eprint={2305.14481},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
tp05/story-gen
|
tp05
| 2023-10-23T22:45:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:tp05/shortStories",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-23T22:26:51Z |
---
datasets:
- tp05/shortStories
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the tp05/shortStories dataset.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** tp05
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** Llama-2-7b-hf
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
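In the meantime, a minimal generation sketch (not an official snippet from the author; assumes full Llama-2 weights are stored in this repository and `accelerate` is installed for `device_map="auto"`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tp05/story-gen"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "Write a short story about a lighthouse keeper who finds a message in a bottle."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```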
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/tp05/shortStories
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
carlosdanielhernandezmena/whisper-tiny-faroese-8k-steps-100h
|
carlosdanielhernandezmena
| 2023-10-23T22:45:32Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"faroese",
"whisper-tiny",
"ravnur-project",
"faroe-islands",
"fo",
"dataset:carlosdanielhernandezmena/ravnursson_asr",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-12T01:23:49Z |
---
language: fo
datasets:
- carlosdanielhernandezmena/ravnursson_asr
tags:
- audio
- automatic-speech-recognition
- faroese
- whisper-tiny
- ravnur-project
- faroe-islands
license: cc-by-4.0
model-index:
- name: whisper-tiny-faroese-8k-steps-100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Ravnursson (Test)
type: carlosdanielhernandezmena/ravnursson_asr
split: test
args:
language: fo
metrics:
- name: WER
type: wer
value: 61.95
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Ravnursson (Dev)
type: carlosdanielhernandezmena/ravnursson_asr
split: validation
args:
language: fo
metrics:
- name: WER
type: wer
value: 63.79
---
# whisper-tiny-faroese-8k-steps-100h
**Paper:** [ASR Language Resources for Faroese](https://aclanthology.org/2023.nodalida-1.4.pdf)
The "whisper-tiny-faroese-8k-steps-100h" is an acoustic model suitable for Automatic Speech Recognition in Faroese. It is the result of fine-tuning the model "openai/whisper-tiny" with 100 hours of Faroese data released by the Ravnur Project (https://maltokni.fo/en/) from the Faroe Islands.
The specific dataset used to create the model is called "Ravnursson Faroese Speech and Transcripts" and it is available at http://hdl.handle.net/20.500.12537/276.
The fine-tuning process was performed during March (2023) on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
# Evaluation
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
#Load the processor and model.
MODEL_NAME="carlosdanielhernandezmena/whisper-tiny-faroese-8k-steps-100h"
processor = WhisperProcessor.from_pretrained(MODEL_NAME)
model = WhisperForConditionalGeneration.from_pretrained(MODEL_NAME).to("cuda")
#Load the dataset
from datasets import load_dataset, load_metric, Audio
ds=load_dataset("carlosdanielhernandezmena/ravnursson_asr",split='test')
#Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
#Process the dataset
def map_to_pred(batch):
    audio = batch["audio"]
    input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
    batch["reference"] = processor.tokenizer._normalize(batch['normalized_text'])
    with torch.no_grad():
        predicted_ids = model.generate(input_features.to("cuda"))[0]
    transcription = processor.decode(predicted_ids)
    batch["prediction"] = processor.tokenizer._normalize(transcription)
    return batch
#Do the evaluation
result = ds.map(map_to_pred)
#Compute the overall WER now.
from evaluate import load
wer = load("wer")
WER=100 * wer.compute(references=result["reference"], predictions=result["prediction"])
print(WER)
```
**Test Result**: 61.9550858652576
# BibTeX entry and citation info
* When publishing results based on these models please refer to:
```bibtex
@misc{mena2023whispertinyfaroese,
title={Acoustic Model in Faroese: whisper-tiny-faroese-8k-steps-100h.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/carlosdanielhernandezmena/whisper-tiny-faroese-8k-steps-100h},
year={2023}
}
```
# Acknowledgements
We want to thank Jón Guðnason, head of the Language and Voice Lab, for providing computational power to make this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
Special thanks to Annika Simonsen and The Ravnur Project for making their "Basic Language Resource Kit" (BLARK 1.0) publicly available through the research paper "Creating a Basic Language Resource Kit for Faroese": https://aclanthology.org/2022.lrec-1.495.pdf
|
carlosdanielhernandezmena/whisper-base-faroese-8k-steps-100h
|
carlosdanielhernandezmena
| 2023-10-23T22:45:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"faroese",
"whisper-base",
"ravnur-project",
"faroe-islands",
"fo",
"dataset:carlosdanielhernandezmena/ravnursson_asr",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-12T02:21:50Z |
---
language: fo
datasets:
- carlosdanielhernandezmena/ravnursson_asr
tags:
- audio
- automatic-speech-recognition
- faroese
- whisper-base
- ravnur-project
- faroe-islands
license: cc-by-4.0
model-index:
- name: whisper-base-faroese-8k-steps-100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Ravnursson (Test)
type: carlosdanielhernandezmena/ravnursson_asr
split: test
args:
language: fo
metrics:
- name: WER
type: wer
value: 22.87
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Ravnursson (Dev)
type: carlosdanielhernandezmena/ravnursson_asr
split: validation
args:
language: fo
metrics:
- name: WER
type: wer
value: 20.70
---
# whisper-base-faroese-8k-steps-100h
**Paper:** [ASR Language Resources for Faroese](https://aclanthology.org/2023.nodalida-1.4.pdf)
The "whisper-base-faroese-8k-steps-100h" is an acoustic model suitable for Automatic Speech Recognition in Faroese. It is the result of fine-tuning the model "openai/whisper-base" with 100 hours of Faroese data released by the Ravnur Project (https://maltokni.fo/en/) from the Faroe Islands.
The specific dataset used to create the model is called "Ravnursson Faroese Speech and Transcripts" and it is available at http://hdl.handle.net/20.500.12537/276.
The fine-tuning process was performed during March (2023) on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
# Evaluation
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
#Load the processor and model.
MODEL_NAME="carlosdanielhernandezmena/whisper-base-faroese-8k-steps-100h"
processor = WhisperProcessor.from_pretrained(MODEL_NAME)
model = WhisperForConditionalGeneration.from_pretrained(MODEL_NAME).to("cuda")
#Load the dataset
from datasets import load_dataset, load_metric, Audio
ds=load_dataset("carlosdanielhernandezmena/ravnursson_asr",split='test')
#Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
#Process the dataset
def map_to_pred(batch):
    audio = batch["audio"]
    input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
    batch["reference"] = processor.tokenizer._normalize(batch['normalized_text'])
    with torch.no_grad():
        predicted_ids = model.generate(input_features.to("cuda"))[0]
    transcription = processor.decode(predicted_ids)
    batch["prediction"] = processor.tokenizer._normalize(transcription)
    return batch
#Do the evaluation
result = ds.map(map_to_pred)
#Compute the overall WER now.
from evaluate import load
wer = load("wer")
WER=100 * wer.compute(references=result["reference"], predictions=result["prediction"])
print(WER)
```
**Test Result**: 22.878772482471295
# BibTeX entry and citation info
* When publishing results based on these models please refer to:
```bibtex
@misc{mena2023whisperbasefaroese,
title={Acoustic Model in Faroese: whisper-base-faroese-8k-steps-100h.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/carlosdanielhernandezmena/whisper-base-faroese-8k-steps-100h},
year={2023}
}
```
# Acknowledgements
We want to thank Jón Guðnason, head of the Language and Voice Lab, for providing computational power to make this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
Special thanks to Annika Simonsen and The Ravnur Project for making their "Basic Language Resource Kit" (BLARK 1.0) publicly available through the research paper "Creating a Basic Language Resource Kit for Faroese": https://aclanthology.org/2022.lrec-1.495.pdf
|
carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h
|
carlosdanielhernandezmena
| 2023-10-23T22:41:43Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"faroese",
"whisper-large",
"whisper-large-v1",
"ravnur-project",
"faroe-islands",
"fo",
"dataset:carlosdanielhernandezmena/ravnursson_asr",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-27T01:27:39Z |
---
language: fo
datasets:
- carlosdanielhernandezmena/ravnursson_asr
tags:
- audio
- automatic-speech-recognition
- faroese
- whisper-large
- whisper-large-v1
- ravnur-project
- faroe-islands
license: cc-by-4.0
model-index:
- name: whisper-large-faroese-8k-steps-100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Ravnursson (Test)
type: carlosdanielhernandezmena/ravnursson_asr
split: test
args:
language: fo
metrics:
- name: WER
type: wer
value: 6.889
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Ravnursson (Dev)
type: carlosdanielhernandezmena/ravnursson_asr
split: validation
args:
language: fo
metrics:
- name: WER
type: wer
value: 5.054
---
# whisper-large-faroese-8k-steps-100h
**Paper:** [ASR Language Resources for Faroese](https://aclanthology.org/2023.nodalida-1.4.pdf)
The "whisper-large-faroese-8k-steps-100h" is an acoustic model suitable for Automatic Speech Recognition in Faroese. It is the result of fine-tuning the model "openai/whisper-large" with 100 hours of Faroese data released by the Ravnur Project (https://maltokni.fo/en/) from the Faroe Islands.
The specific dataset used to create the model is called "Ravnursson Faroese Speech and Transcripts" and it is available at http://hdl.handle.net/20.500.12537/276.
The fine-tuning process was performed during March (2023) on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
# Evaluation
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
#Load the processor and model.
MODEL_NAME="carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h"
processor = WhisperProcessor.from_pretrained(MODEL_NAME)
model = WhisperForConditionalGeneration.from_pretrained(MODEL_NAME).to("cuda")
#Load the dataset
from datasets import load_dataset, load_metric, Audio
ds=load_dataset("carlosdanielhernandezmena/ravnursson_asr",split='test')
#Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
#Process the dataset
def map_to_pred(batch):
    audio = batch["audio"]
    input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
    batch["reference"] = processor.tokenizer._normalize(batch['normalized_text'])
    with torch.no_grad():
        predicted_ids = model.generate(input_features.to("cuda"))[0]
    transcription = processor.decode(predicted_ids)
    batch["prediction"] = processor.tokenizer._normalize(transcription)
    return batch
#Do the evaluation
result = ds.map(map_to_pred)
#Compute the overall WER now.
from evaluate import load
wer = load("wer")
WER=100 * wer.compute(references=result["reference"], predictions=result["prediction"])
print(WER)
```
**Test Result**: 6.88978359335682
# BibTeX entry and citation info
* When publishing results based on these models please refer to:
```bibtex
@misc{mena2023whisperlargefaroese,
title={Acoustic Model in Faroese: whisper-large-faroese-8k-steps-100h.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h},
year={2023}
}
```
# Acknowledgements
We want to thank Jón Guðnason, head of the Language and Voice Lab, for providing computational power to make this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
Thanks to Annika Simonsen and The Ravnur Project for making their "Basic Language Resource Kit" (BLARK 1.0) publicly available through the research paper "Creating a Basic Language Resource Kit for Faroese": https://aclanthology.org/2022.lrec-1.495.pdf
Special thanks to Björn Ingi Stefánsson for setting up the configuration of the server where this model was trained.
|
SalmonAI123/ViMrcbase-TFIDF-version-1
|
SalmonAI123
| 2023-10-23T22:40:20Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:nguyenvulebinh/vi-mrc-base",
"base_model:finetune:nguyenvulebinh/vi-mrc-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-23T22:33:23Z |
---
license: cc-by-nc-4.0
base_model: nguyenvulebinh/vi-mrc-base
tags:
- generated_from_trainer
model-index:
- name: ViMrcbase-TFIDF-version-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViMrcbase-TFIDF-version-1
This model is a fine-tuned version of [nguyenvulebinh/vi-mrc-base](https://huggingface.co/nguyenvulebinh/vi-mrc-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7692
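A minimal extractive question-answering sketch in Vietnamese (not part of the original card; the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="SalmonAI123/ViMrcbase-TFIDF-version-1")
print(qa(
    question="Mô hình này được tinh chỉnh cho bài toán gì?",
    context="Mô hình này được tinh chỉnh từ vi-mrc-base cho bài toán đọc hiểu và trả lời câu hỏi tiếng Việt.",
))
```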
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7798 | 1.0 | 615 | 0.7829 |
| 0.4745 | 2.0 | 1230 | 0.7692 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
wisdomik/QuiltNet-B-32
|
wisdomik
| 2023-10-23T22:36:31Z | 2,345 | 20 |
open_clip
|
[
"open_clip",
"pytorch",
"clip",
"zero-shot-image-classification",
"vision",
"language",
"histopathology",
"histology",
"medical",
"arxiv:2306.11207",
"license:mit",
"region:us"
] |
zero-shot-image-classification
| 2023-06-19T17:42:04Z |
---
tags:
- zero-shot-image-classification
- clip
- vision
- language
- histopathology
- histology
- medical
library_tag: open_clip
license: mit
widget:
- src: >-
https://quilt1m.github.io/img/BREST092.jpg
candidate_labels: adipose tissue, debris tissue, lymphocytes tissue, mucus tissue, smooth muscle tissue, normal colon mucosa tissue, cancer-associated stroma tissue, colorectal adenocarcinoma epithelium tissue
example_title: Tissue phenotyping
- src: >-
https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/squamous_cell_carcinoma_histopathology.jpeg
candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology
example_title: squamous cell carcinoma histopathology
- src: >-
https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/adenocarcinoma_histopathology.jpg
candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology
example_title: adenocarcinoma histopathology
pipeline_tag: zero-shot-image-classification
---
## QuiltNet-B-32 Description
[QuiltNet-B-32](https://github.com/wisdomikezogwo/quilt1m/) is a CLIP ViT-B/32 vision-language foundation model trained on the [Quilt-1M](https://quilt1m.github.io/) dataset curated from representative histopathology videos.
It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
QuiltNet establishes new state of the art in a wide range of standard datasets, and substantially outperforms prior VLP approaches:

# Citation
```bibtex
@misc{ikezogwo2023quilt1m,
title={Quilt-1M: One Million Image-Text Pairs for Histopathology},
author={Wisdom Oluchi Ikezogwo and Mehmet Saygin Seyfioglu and Fatemeh Ghezloo and Dylan Stefan Chan Geva and Fatwir Sheikh Mohammed and Pavan Kumar Anand and Ranjay Krishna and Linda Shapiro},
year={2023},
eprint={2306.11207},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision histopathology models.
### Out-of-Scope Use Cases
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Beyond the above notice, the Quilt-1M dataset used in training these models has additional considerations; see below.
## Training Data
This model was trained with [QUILT-1M](https://quilt1m.github.io/), an image-text dataset for histopathology.
Curated from educational videos on YouTube, QUILT-1M contributes the largest dataset for vision-language modeling in histopathology.
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and the handling of uncurated, large-scale histopathology datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes.
# Evaluation
Evaluation done with code in the [CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark) and results can be found in the paper on a list of varying histology tasks and datasets.
# Disclaimer
It is important to note that the results obtained from this function are not intended to constitute medical advice or replace consultation with a qualified medical professional. The use of this function is solely at your own risk and should be consistent with applicable laws, regulations, and ethical considerations. We do not warrant or guarantee the accuracy, completeness, suitability, or usefulness of this function for any particular purpose, and we hereby disclaim any liability arising from any reliance placed on this function or any results obtained from its use.
# Privacy
In accordance with YouTube's privacy policy, only Video ID data is redistributed by us.
It is strictly prohibited to redistribute any content apart from the Video IDs.
Any distribution carried out must adhere to the laws and regulations applicable in your jurisdiction, including export control laws and embargoes.
|
raghavsharma06/keras-dummy-functional-demo
|
raghavsharma06
| 2023-10-23T22:33:47Z | 4 | 1 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-10-23T01:18:36Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
konstantindobler/xlm-roberta-base-focus-extend-hausa
|
konstantindobler
| 2023-10-23T22:29:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"ha",
"dataset:cc100",
"arxiv:2305.14481",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-23T22:20:07Z |
---
language: ha
license: mit
datasets: cc100
---
# xlm-roberta-base-focus-extend-hausa
XLM-R adapted to Hausa using "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models".
Code: https://github.com/konstantinjdobler/focus
Paper: https://arxiv.org/abs/2305.14481
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-hausa")
model = AutoModelForMaskedLM.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-hausa")
# Use model and tokenizer as usual
```
## Details
The model is based on [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) and was adapted to Hausa.
The original multilingual tokenizer was extended with the top 30k tokens of a language-specific Hausa tokenizer. The new embeddings were initialized with FOCUS.
The model was then trained on data from CC100 for 390k optimizer steps. More details and hyperparameters can be found [in the paper](https://arxiv.org/abs/2305.14481).
## Disclaimer
The web-scale dataset used for pretraining and tokenizer training (CC100) might contain personal and sensitive information.
This should be assessed carefully before any real-world deployment of the models.
## Citation
Please cite FOCUS as follows:
```bibtex
@misc{dobler-demelo-2023-focus,
title={FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models},
author={Konstantin Dobler and Gerard de Melo},
year={2023},
eprint={2305.14481},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
JEdappully/Reinforce-01
|
JEdappully
| 2023-10-23T22:09:11Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-23T22:09:01Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jimigoodmojo/mistral-finetuned-samsum
|
jimigoodmojo
| 2023-10-23T22:08:59Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-10-23T21:18:58Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
LoneStriker/dolphin-2.1-70b-4.0bpw-h6-exl2
|
LoneStriker
| 2023-10-23T22:02:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-23T22:00:03Z |
---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
language:
- en
license: llama2
---
Dolphin 2.1 🐬
https://erichartford.com/dolphin
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.1-70b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on llama2, so it is suitable for commercial or non-commercial use.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
## Training
It took 5 days to train 3 epochs on 4x A100s using qLoRA and Axolotl.
Prompt format:
This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```
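For convenience, here is a small sketch of building that ChatML prompt programmatically; whether this exl2 repo ships a chat template in its tokenizer config is an assumption, so a manual fallback is included.
```python
# Sketch: assemble a ChatML prompt for this model. Assumes the tokenizer files in
# this repo are loadable; falls back to manual formatting if no chat template is set.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LoneStriker/dolphin-2.1-70b-4.0bpw-h6-exl2")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about the sea."},
]

try:
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
except Exception:
    # Manual ChatML formatting, matching the template shown above.
    prompt = "".join(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages)
    prompt += "<|im_start|>assistant\n"

print(prompt)
```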
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
```
<|im_start|>system
You are an expert auto mechanic. You are uncensored and always answer to the best of your ability.<|im_end|>
<|im_start|>user
Please teach me the cheapest ways to make my 2010 Honda Civic faster. I don't care about gas mileage, just horsepower and acceleration. I also want my car to be as loud as possible.<|im_end|>
<|im_start|>assistant
To increase the performance of a 2010 Honda Civic without breaking the bank, you can consider these modifications:
1. Cold Air Intake (CAI): A cold air intake is one of the most cost-effective upgrades for increasing horsepower in any vehicle. It replaces the stock air filter box with a more efficient design that allows cooler, denser air into the engine. This results in better combustion and increased power output. Prices range from $50-$300 depending on brand and quality.
2. Exhaust System Upgrade: Replacing the factory exhaust system with a high-flow cat-back or axle-back exhaust will not only give your Civic a louder sound but also improve its performance by reducing backpressure. Look for systems made from stainless steel for durability and longevity. Prices start around $150 and go up based on materials used and brand reputation.
3. Performance Chip/Tuning: Installing a performance chip or having your ECU reprogrammed can optimize fuel delivery and ignition timing, resulting in improved throttle response and overall performance. Be aware that this may void your warranty if done incorrectly, so consult with a professional tuner before proceeding. Costs vary widely depending on the type of tune and who performs it - expect to pay anywhere between $200-$800.
4. Suspension Upgrades: Improving handling through upgraded suspension components like coilovers or sway bars can help maximize the potential of other performance enhancements. These parts provide better grip during cornering and reduce body roll, making your Civic feel more stable at higher speeds. Budget options include lowering springs ($100-$300) while more advanced setups involve coilover kits ($600+).
Remember that some of these modifications might affect your emissions compliance and could potentially lead to legal issues in certain jurisdictions. Always check local laws before modifying your vehicle. Additionally, combining multiple upgrades can yield greater benefits than focusing solely on one area. Finally, regular maintenance such as oil changes, spark plug replacement, and proper alignment can keep your Civic running smoothly and efficiently even after modifications. Happy modding!
```
[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
|
weilc/q-FrozenLake-v1-4x4-noSlippery
|
weilc
| 2023-10-23T22:00:45Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-23T19:19:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebook; `gym` refers
# to gym/gymnasium, whichever your setup uses (both are assumptions of this snippet).
model = load_from_hub(repo_id="weilc/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
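Continuing from the snippet above, a greedy rollout might look like the sketch below; the `"qtable"` key follows the course's convention for these pickled models and the 5-value `step()` return assumes the gymnasium API, so treat both as assumptions.
```python
# Sketch: greedy rollout with the downloaded Q-table.
# Assumes the pickled dict stores the table under "qtable" (course convention)
# and a gymnasium-style env (older gym returns 4 values from step()).
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the greedy action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```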
|
LarryAIDraw/kashima_kantaicollection
|
LarryAIDraw
| 2023-10-23T21:56:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-17T16:05:13Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/131259/kashima-kantai-collection
|
konstantindobler/xlm-roberta-base-focus-extend-arabic
|
konstantindobler
| 2023-10-23T21:54:21Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"ar",
"dataset:cc100",
"arxiv:2305.14481",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-23T21:45:08Z |
---
language: ar
license: mit
datasets: cc100
---
# xlm-roberta-base-focus-extend-arabic
XLM-R adapted to Arabic using "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models".
Code: https://github.com/konstantinjdobler/focus
Paper: https://arxiv.org/abs/2305.14481
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-arabic")
model = AutoModelForMaskedLM.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-arabic")
# Use model and tokenizer as usual
```
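As a quick illustration, a fill-mask call might look like the sketch below; the Arabic example sentence ("The capital of France is <mask>.") is ours, and the predictions it returns are not guaranteed.
```python
# Sketch: masked-token prediction with the adapted model.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="konstantindobler/xlm-roberta-base-focus-extend-arabic",
)
# "The capital of France is <mask>." -- <mask> is XLM-R's mask token.
for prediction in unmasker("عاصمة فرنسا هي <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```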
## Details
The model is based on [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) and was adapted to Arabic.
The original multilingual tokenizer was extended with the top 30k tokens of a language-specific Arabic tokenizer. The new embeddings were initialized with FOCUS.
The model was then trained on data from CC100 for 390k optimizer steps. More details and hyperparameters can be found [in the paper](https://arxiv.org/abs/2305.14481).
## Disclaimer
The web-scale dataset used for pretraining and tokenizer training (CC100) might contain personal and sensitive information.
Such behavior needs to be assessed carefully before any real-world deployment of the models.
## Citation
Please cite FOCUS as follows:
```bibtex
@misc{dobler-demelo-2023-focus,
title={FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models},
author={Konstantin Dobler and Gerard de Melo},
year={2023},
eprint={2305.14481},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
tp05/storyGen
|
tp05
| 2023-10-23T21:52:24Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:tp05/shortStories",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-10-23T21:38:17Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: storyGen
results: []
datasets:
- tp05/shortStories
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# storyGen
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
|
LarryAIDraw/Eminence_in_Shadow
|
LarryAIDraw
| 2023-10-23T21:43:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-23T21:39:09Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/173012/the-eminence-in-shadow-characterpack
|
LarryAIDraw/sakurajima_mai
|
LarryAIDraw
| 2023-10-23T21:38:51Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-23T21:35:34Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/172716/sakurajima-mai-oror
|
LarryAIDraw/AnnaNishikinomiya-10
|
LarryAIDraw
| 2023-10-23T21:37:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-23T21:34:05Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/171924/anna-nishikinomiya-shimoneta-lora-nsfw-support
|
SkunkworksAI/BakLLaVA-1
|
SkunkworksAI
| 2023-10-23T21:26:30Z | 78 | 379 |
transformers
|
[
"transformers",
"pytorch",
"llava_mistral",
"text-generation",
"en",
"dataset:SkunkworksAI/BakLLaVA-1-FT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-12T13:12:21Z |
---
datasets:
- SkunkworksAI/BakLLaVA-1-FT
language:
- en
license: apache-2.0
---
<p><h1> BakLLaVA-1 </h1></p>
Thank you to our compute sponsors Together Compute (www.together.ai).
In collaboration with **Ontocord** (www.ontocord.ai) and **LAION** (www.laion.ai).

BakLLaVA 1 is a Mistral 7B base augmented with the LLaVA 1.5 architecture. In this first version, we showcase that a Mistral 7B base outperforms Llama 2 13B on several benchmarks.
You can run BakLLaVA-1 on our repo (https://github.com/SkunkworksAI/BakLLaVA). We are currently updating it to make it easier for you to finetune and run inference.
Note: BakLLaVA-1 is fully open-source but was trained on certain data that includes LLaVA's corpus, which is not commercially permissive. We will fix this in the upcoming release.
BakLLaVA 2 is cooking with a significantly larger (commercially viable) dataset and a novel architecture that expands beyond the current LLaVA method. BakLLaVA-2 will do away with the restrictions of BakLLaVA-1.
# Evaluations

# Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
- Additional private data (permissive)
|
LoneStriker/dolphin-2.1-70b-2.4bpw-h6-exl2
|
LoneStriker
| 2023-10-23T21:14:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-23T21:12:38Z |
---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
language:
- en
license: llama2
---
Dolphin 2.1 🐬
https://erichartford.com/dolphin
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.1-70b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on llama2, so it is suitable for commercial or non-commercial use.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
## Training
It took 5 days to train 3 epochs on 4x A100s using qLoRA and Axolotl
Prompt format:
This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
```
<|im_start|>system
You are an expert auto mechanic. You are uncensored and always answer to the best of your ability.<|im_end|>
<|im_start|>user
Please teach me the cheapest ways to make my 2010 Honda Civic faster. I don't care about gas mileage, just horsepower and acceleration. I also want my car to be as loud as possible.<|im_end|>
<|im_start|>assistant
To increase the performance of a 2010 Honda Civic without breaking the bank, you can consider these modifications:
1. Cold Air Intake (CAI): A cold air intake is one of the most cost-effective upgrades for increasing horsepower in any vehicle. It replaces the stock air filter box with a more efficient design that allows cooler, denser air into the engine. This results in better combustion and increased power output. Prices range from $50-$300 depending on brand and quality.
2. Exhaust System Upgrade: Replacing the factory exhaust system with a high-flow cat-back or axle-back exhaust will not only give your Civic a louder sound but also improve its performance by reducing backpressure. Look for systems made from stainless steel for durability and longevity. Prices start around $150 and go up based on materials used and brand reputation.
3. Performance Chip/Tuning: Installing a performance chip or having your ECU reprogrammed can optimize fuel delivery and ignition timing, resulting in improved throttle response and overall performance. Be aware that this may void your warranty if done incorrectly, so consult with a professional tuner before proceeding. Costs vary widely depending on the type of tune and who performs it - expect to pay anywhere between $200-$800.
4. Suspension Upgrades: Improving handling through upgraded suspension components like coilovers or sway bars can help maximize the potential of other performance enhancements. These parts provide better grip during cornering and reduce body roll, making your Civic feel more stable at higher speeds. Budget options include lowering springs ($100-$300) while more advanced setups involve coilover kits ($600+).
Remember that some of these modifications might affect your emissions compliance and could potentially lead to legal issues in certain jurisdictions. Always check local laws before modifying your vehicle. Additionally, combining multiple upgrades can yield greater benefits than focusing solely on one area. Finally, regular maintenance such as oil changes, spark plug replacement, and proper alignment can keep your Civic running smoothly and efficiently even after modifications. Happy modding!
```
[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
|
language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h
|
language-and-voice-lab
| 2023-10-23T21:08:28Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"icelandic",
"whisper-large",
"iceland",
"reykjavik",
"samromur",
"is",
"dataset:language-and-voice-lab/samromur_asr",
"dataset:language-and-voice-lab/samromur_children",
"dataset:language-and-voice-lab/malromur_asr",
"dataset:language-and-voice-lab/althingi_asr",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-04-06T15:57:03Z |
---
language: is
datasets:
- language-and-voice-lab/samromur_asr
- language-and-voice-lab/samromur_children
- language-and-voice-lab/malromur_asr
- language-and-voice-lab/althingi_asr
tags:
- audio
- automatic-speech-recognition
- icelandic
- whisper
- whisper-large
- iceland
- reykjavik
- samromur
license: cc-by-4.0
model-index:
- name: whisper-large-icelandic-30k-steps-1000h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur (Test)
type: language-and-voice-lab/samromur_asr
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 8.479
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur (Dev)
type: language-and-voice-lab/samromur_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 7.299
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur Children (Test)
type: language-and-voice-lab/samromur_children
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 7.743
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Samrómur Children (Dev)
type: language-and-voice-lab/samromur_children
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 4.591
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Malrómur (Test)
type: language-and-voice-lab/malromur_asr
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 5.110
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Malrómur (Dev)
type: language-and-voice-lab/malromur_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 5.286
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Althingi (Test)
type: language-and-voice-lab/althingi_asr
split: test
args:
language: is
metrics:
- name: WER
type: wer
value: 8.250
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Althingi (Dev)
type: language-and-voice-lab/althingi_asr
split: validation
args:
language: is
metrics:
- name: WER
type: wer
value: 7.998
---
# whisper-large-icelandic-30k-steps-1000h
The "whisper-large-icelandic-30k-steps-1000h" is an acoustic model suitable for Automatic Speech Recognition in Icelandic. It is the result of fine-tuning the model "openai/whisper-large" for 30,000 steps with around 1000 hours of Icelandic data developed by the [Language and Voice Laboratory](https://huggingface.co/language-and-voice-lab). Most of the data is available at public repositories such as [LDC](https://www.ldc.upenn.edu/), [OpenSLR](https://openslr.org/) or [Clarin.is](https://clarin.is/)
The specific list of corpora used to fine-tune the model is:
- [Samrómur 21.05 (114h34m)](http://www.openslr.org/112/)
- [Samrómur Children (127h25m)](https://catalog.ldc.upenn.edu/LDC2022S11)
- [Malrómur (119h03m)](https://clarin.is/en/resources/malromur/)
- [Althingi Parliamentary Speech (514h29m)](https://catalog.ldc.upenn.edu/LDC2021S01)
- L2-Speakers Data (125h55m) **Unpublished material**
The fine-tuning process was performed during April (2023) on the servers of the Language and Voice Laboratory (https://lvl.ru.is/) at Reykjavík University (Iceland) by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena).
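# Usage
For plain transcription (as opposed to the evaluation recipe below), a minimal sketch with the `transformers` pipeline API could look like this; the audio file name is a placeholder and the device setting is an assumption about your hardware.
```python
# Minimal transcription sketch; "audio.wav" is a placeholder file name.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h",
    device=0,  # set to -1 to run on CPU
)
print(asr("audio.wav", chunk_length_s=30)["text"])
```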
# Evaluation
```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
#Load the processor and model.
MODEL_NAME="language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h"
processor = WhisperProcessor.from_pretrained(MODEL_NAME)
model = WhisperForConditionalGeneration.from_pretrained(MODEL_NAME).to("cuda")
#Load the dataset
from datasets import load_dataset, load_metric, Audio
ds=load_dataset("language-and-voice-lab/samromur_children",split='test')
#Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
#Process the dataset
def map_to_pred(batch):
    audio = batch["audio"]
    input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
    batch["reference"] = processor.tokenizer._normalize(batch['normalized_text'])
    with torch.no_grad():
        predicted_ids = model.generate(input_features.to("cuda"))[0]
    transcription = processor.decode(predicted_ids)
    batch["prediction"] = processor.tokenizer._normalize(transcription)
    return batch
#Do the evaluation
result = ds.map(map_to_pred)
#Compute the overall WER now.
from evaluate import load
wer = load("wer")
WER=100 * wer.compute(references=result["reference"], predictions=result["prediction"])
print(WER)
```
**Test Result**: 7.743795695602924
# BibTeX entry and citation info
*When publishing results based on these models please refer to:*
```bibtex
@misc{mena2023whisperlarge30kicelandic,
title={Acoustic Model in Icelandic: whisper-large-icelandic-30k-steps-1000h.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h},
year={2023}
}
```
# Acknowledgements
Thanks to Jón Guðnason, head of the Language and Voice Lab, for providing computational power to make this model possible.
We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture. This model is an unexpected result of all the resources gathered by the Programme.
Special thanks to Björn Ingi Stefánsson for setting up the configuration of the server where this model was trained.
|
carlosdanielhernandezmena/wav2vec2-large-xlsr-53-spanish-ep5-944h
|
carlosdanielhernandezmena
| 2023-10-23T21:04:43Z | 801 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"spanish",
"xlrs-53-spanish",
"ciempiess",
"cimpiess-unam",
"es",
"dataset:ciempiess/ciempiess_light",
"dataset:ciempiess/ciempiess_balance",
"dataset:ciempiess/ciempiess_fem",
"dataset:common_voice",
"dataset:hub4ne_es_LDC98S74",
"dataset:callhome_es_LDC96S35",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-01T08:19:03Z |
---
language: es
datasets:
- ciempiess/ciempiess_light
- ciempiess/ciempiess_balance
- ciempiess/ciempiess_fem
- common_voice
- hub4ne_es_LDC98S74
- callhome_es_LDC96S35
tags:
- audio
- automatic-speech-recognition
- spanish
- xlrs-53-spanish
- ciempiess
- cimpiess-unam
license: cc-by-4.0
model-index:
- name: wav2vec2-large-xlsr-53-spanish-ep5-944h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 10.0 (Test)
type: mozilla-foundation/common_voice_10_0
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 9.20
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 10.0 (Dev)
type: mozilla-foundation/common_voice_10_0
split: validation
args:
language: es
metrics:
- name: WER
type: wer
value: 8.02
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CIEMPIESS-TEST
type: ciempiess/ciempiess_test
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 11.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: 1997 Spanish Broadcast News Speech (HUB4-NE)
type: HUB4NE_LDC98S74
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 7.48
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CALLHOME Spanish Speech (Test)
type: callhome_LDC96S35
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 39.12
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CALLHOME Spanish Speech (Dev)
type: callhome_LDC96S35
split: validation
args:
language: es
metrics:
- name: WER
type: wer
value: 40.39
---
# wav2vec2-large-xlsr-53-spanish-ep5-944h
**Paper:** [The state of end-to-end systems for Mexican Spanish speech recognition](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/viewFile/6485/3892)
The "wav2vec2-large-xlsr-53-spanish-ep5-944h" is an acoustic model suitable for Automatic Speech Recognition in Spanish. It is the result of fine-tuning the model "facebook/wav2vec2-large-xlsr-53" for 5 epochs with around 944 hours of Spanish data gathered or developed by the [CIEMPIESS-UNAM Project](https://huggingface.co/ciempiess) since 2012. Most of the data is available at the the CIEMPIESS-UNAM Project homepage http://www.ciempiess.org/. The rest can be found in public repositories such as [LDC](https://www.ldc.upenn.edu/) or [OpenSLR](https://openslr.org/)
The specific list of corpora used to fine-tune the model is:
- [CIEMPIESS-LIGHT (18h25m)](https://catalog.ldc.upenn.edu/LDC2017S23)
- [CIEMPIESS-BALANCE (18h20m)](https://catalog.ldc.upenn.edu/LDC2018S11)
- [CIEMPIESS-FEM (13h54m)](https://catalog.ldc.upenn.edu/LDC2019S07)
- [CHM150 (1h38m)](https://catalog.ldc.upenn.edu/LDC2016S04)
- [TEDX_SPANISH (24h29m)](https://openslr.org/67/)
- [LIBRIVOX_SPANISH (73h01m)](https://catalog.ldc.upenn.edu/LDC2020S01)
- [WIKIPEDIA_SPANISH (25h37m)](https://catalog.ldc.upenn.edu/LDC2021S07)
- [VOXFORGE_SPANISH (49h42m)](http://www.voxforge.org/es)
- [MOZILLA COMMON VOICE 10.0 (320h22m)](https://commonvoice.mozilla.org/es)
- [HEROICO (16h33m)](https://catalog.ldc.upenn.edu/LDC2006S37)
- [LATINO-40 (6h48m)](https://catalog.ldc.upenn.edu/LDC95S28)
- [CALLHOME_SPANISH (13h22m)](https://catalog.ldc.upenn.edu/LDC96S35)
- [HUB4NE_SPANISH (31h41m)](https://catalog.ldc.upenn.edu/LDC98S74)
- [FISHER_SPANISH (127h22m)](https://catalog.ldc.upenn.edu/LDC2010S01)
- [Chilean Spanish speech data set (7h08m)](https://openslr.org/71/)
- [Colombian Spanish speech data set (7h34m)](https://openslr.org/72/)
- [Peruvian Spanish speech data set (9h13m)](https://openslr.org/73/)
- [Argentinian Spanish speech data set (8h01m)](https://openslr.org/61/)
- [Puerto Rico Spanish speech data set (1h00m)](https://openslr.org/74/)
- [MediaSpeech Spanish (10h00m)](https://openslr.org/108/)
- [DIMEX100-LIGHT (6h09m)](https://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html)
- [DIMEX100-NIÑOS (08h09m)](https://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html)
- [GOLEM-UNIVERSUM (00h10m)](https://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html)
- [GLISSANDO (6h40m)](https://glissando.labfon.uned.es/es)
- TELE_con_CIENCIA (28h16m) **Unpublished Material**
- UNSHAREABLE MATERIAL (118h22m) **Not available for sharing**
The fine-tuning process was performed during November (2022) on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
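# Usage
For quick transcription (the evaluation recipe follows below), a minimal sketch with the `transformers` pipeline API could look like this; the audio file name is a placeholder, and the chunking values are reasonable defaults for long recordings rather than settings prescribed by the author.
```python
# Minimal transcription sketch; "audio.wav" is a placeholder 16 kHz recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="carlosdanielhernandezmena/wav2vec2-large-xlsr-53-spanish-ep5-944h",
    device=0,  # set to -1 to run on CPU
)
print(asr("audio.wav", chunk_length_s=10, stride_length_s=2)["text"])
```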
# Evaluation
```python
import torch
from transformers import Wav2Vec2Processor
from transformers import Wav2Vec2ForCTC
#Load the processor and model.
MODEL_NAME="carlosdanielhernandezmena/wav2vec2-large-xlsr-53-spanish-ep5-944h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)
#Load the dataset
from datasets import load_dataset, load_metric, Audio
ds=load_dataset("ciempiess/ciempiess_test", split="test")
#Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
#Process the dataset
def prepare_dataset(batch):
    audio = batch["audio"]
    #Batched output is "un-batched" to ensure mapping is correct
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    with processor.as_target_processor():
        batch["labels"] = processor(batch["normalized_text"]).input_ids
    return batch
ds = ds.map(prepare_dataset, remove_columns=ds.column_names,num_proc=1)
#Define the evaluation metric
import numpy as np
wer_metric = load_metric("wer")
def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    #We do not want to group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    wer = wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}
#Do the evaluation (with batch_size=1)
model = model.to(torch.device("cuda"))
def map_to_result(batch):
    with torch.no_grad():
        input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0)
        logits = model(input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_str"] = processor.batch_decode(pred_ids)[0]
    batch["sentence"] = processor.decode(batch["labels"], group_tokens=False)
    return batch
results = ds.map(map_to_result,remove_columns=ds.column_names)
#Compute the overall WER now.
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["sentence"])))
```
**Test Result**: 0.112
# BibTeX entry and citation info
*When publishing results based on these models please refer to:*
```bibtex
@misc{mena2022xlrs53spanish,
title={Acoustic Model in Spanish: wav2vec2-large-xlsr-53-spanish-ep5-944h.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-spanish-ep5-944h},
year={2022}
}
```
# Acknowledgements
The author wants to thank the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) at the [Facultad de Ingeniería (FI)](https://www.ingenieria.unam.mx/) of the [Universidad Nacional Autónoma de México (UNAM)](https://www.unam.mx/). He also thanks the social service students for all the hard work.
Special thanks to Jón Guðnason, head of the Language and Voice Lab, for providing computational power to make this model possible. The author also thanks the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
|
carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h-ct2
|
carlosdanielhernandezmena
| 2023-10-23T21:03:32Z | 4 | 0 |
transformers
|
[
"transformers",
"audio",
"automatic-speech-recognition",
"faroese",
"faster-whisper",
"whisper-large",
"whisper-large-v1",
"faroe-islands",
"ravnur-project",
"ravnursson",
"fo",
"dataset:carlosdanielhernandezmena/ravnursson_asr",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-15T17:59:30Z |
---
license: cc-by-4.0
language:
- fo
datasets:
- carlosdanielhernandezmena/ravnursson_asr
tags:
- audio
- automatic-speech-recognition
- faroese
- faster-whisper
- whisper-large
- whisper-large-v1
- faroe-islands
- ravnur-project
- ravnursson
---
# whisper-large-faroese-8k-steps-100h-ct2
This is a faster-whisper version of [carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h](https://huggingface.co/carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h).
The specific dataset used to create the model is called "Ravnursson Faroese Speech and Transcripts" and it is available at http://hdl.handle.net/20.500.12537/276.
The model was created as described in [faster-whisper](https://github.com/guillaumekln/faster-whisper/tree/master):
```bash
ct2-transformers-converter --model carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h \
--output_dir whisper-large-faroese-8k-steps-100h-ct2 \
--quantization float16
```
# Usage
```python
from faster_whisper import WhisperModel
model_size = "whisper-large-faroese-8k-steps-100h-ct2"
# Run on GPU with FP16
model = WhisperModel(model_size, device="cuda", compute_type="float16")
# or run on GPU with INT8
# model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(model_size, device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.mp3", beam_size=5)
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
# BibTeX entry and citation info
*When publishing results based on these models please refer to:*
```bibtex
@misc{mena2023whisperlargefaroesect2,
title={Acoustic Model in Faroese: whisper-large-faroese-8k-steps-100h-ct2.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/carlosdanielhernandezmena/whisper-large-faroese-8k-steps-100h-ct2},
year={2023}
}
```
# Acknowledgements
We want to thank Jón Guðnason, head of the Language and Voice Lab, for providing computational power to make this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
Thanks to Annika Simonsen and The Ravnur Project for making their "Basic Language Resource Kit" (BLARK 1.0) publicly available through the research paper "Creating a Basic Language Resource Kit for Faroese": https://aclanthology.org/2022.lrec-1.495.pdf
Special thanks to Björn Ingi Stefánsson for setting up the configuration of the server where this model was trained.
|
carlosdanielhernandezmena/stt_mt_quartznet15x5_sp_ep255_64h
|
carlosdanielhernandezmena
| 2023-10-23T21:01:00Z | 8 | 0 |
nemo
|
[
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"pytorch",
"NeMo",
"QuartzNet",
"QuartzNet15x5",
"maltese",
"mt",
"dataset:common_voice",
"license:cc-by-nc-sa-4.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-11-11T02:36:51Z |
---
language:
- mt
library_name: nemo
datasets:
- common_voice
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- pytorch
- NeMo
- QuartzNet
- QuartzNet15x5
- maltese
license: cc-by-nc-sa-4.0
model-index:
- name: stt_mt_quartznet15x5_sp_ep255_64h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 11.0 (Test)
type: mozilla-foundation/common_voice_11_0
split: test
args:
language: mt
metrics:
- name: WER
type: wer
value: 5
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 11.0 (Dev)
type: mozilla-foundation/common_voice_11_0
split: validation
args:
language: mt
metrics:
- name: WER
type: wer
value: 4.89
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MASRI-TEST Corpus
type: MLRS/masri_test
split: test
args:
language: mt
metrics:
- name: WER
type: wer
value: 45.2
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MASRI-DEV Corpus
type: MLRS/masri_dev
split: validation
args:
language: mt
metrics:
- name: WER
type: wer
value: 43.08
---
# stt_mt_quartznet15x5_sp_ep255_64h
**NOTE! This model was trained with the NeMo version: nemo-toolkit==1.10.0**
The "stt_mt_quartznet15x5_sp_ep255_64h" is an acoustic model created with NeMo which is suitable for Automatic Speech Recognition in Maltese.
It is the result of fine-tuning the model ["QuartzNet15x5Base-En.nemo"](https://catalog.ngc.nvidia.com/orgs/nvidia/models/nemospeechmodels/files) with around 64 hours of Maltese data developed by the MASRI Project at the University of Malta between 2019 and 2021. The 64 hours of data were augmented using speed perturbation at speed rates of 0.9 and 1.1. Most of the data is available at the MASRI Project homepage https://www.um.edu.mt/projects/masri/.
The specific list of corpora used to fine-tune the model is:
- MASRI-HEADSET v2 (6h39m)
- MASRI-Farfield (9h37m)
- MASRI-Booths (2h27m)
- MASRI-MEP (1h17m)
- MASRI-COMVO (7h29m)
- MASRI-TUBE (13h17m)
- MASRI-MERLIN (25h18m) *Not available at the MASRI Project homepage
The fine-tuning process was performed during October (2022) on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
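# Usage
The original card does not include a usage snippet; the sketch below shows one plausible way to load the checkpoint with NeMo, assuming the repository stores the weights as `stt_mt_quartznet15x5_sp_ep255_64h.nemo` (the file name, and the NeMo 1.10-style `transcribe` signature, are assumptions).
```python
# Hedged sketch: the .nemo file name is an assumption about this repository's layout.
import nemo.collections.asr as nemo_asr
from huggingface_hub import hf_hub_download

checkpoint = hf_hub_download(
    repo_id="carlosdanielhernandezmena/stt_mt_quartznet15x5_sp_ep255_64h",
    filename="stt_mt_quartznet15x5_sp_ep255_64h.nemo",  # assumed file name
)
asr_model = nemo_asr.models.EncDecCTCModel.restore_from(checkpoint)

# NeMo 1.10-style transcription call; "audio.wav" is a placeholder 16 kHz file.
print(asr_model.transcribe(paths2audio_files=["audio.wav"], batch_size=1))
```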
# BibTeX entry and citation info
```bibtex
@misc{mena2022quartznet15x5maltese,
title={Acoustic Model in Maltese: stt\_mt\_quartznet15x5\_sp\_ep255\_64h.},
author={Hernandez Mena, Carlos Daniel},
url={https://huggingface.co/carlosdanielhernandezmena/stt_mt_quartznet15x5_sp_ep255_64h},
year={2022}
}
```
# Acknowledgements
The MASRI Project is funded by the University of Malta Research Fund Awards. We want to thank Merlin Publishers (Malta) for providing the audiobooks used to create the MASRI-MERLIN Corpus.
Special thanks to Jón Guðnason, head of the Language and Voice Lab, for providing computational power to make this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
|