modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
connectivity/bert_ft_qqp-68 | d37e8e6bb16b99780723d84337483bd680bec84c | 2022-05-21T16:36:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-68 | 4 | null | transformers | 19,900 | Entry not found |
connectivity/bert_ft_qqp-69 | 69ae44f0a6045a008d0408812da21aae62af916f | 2022-05-21T16:36:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-69 | 4 | null | transformers | 19,901 | Entry not found |
connectivity/bert_ft_qqp-71 | 707140f2abb70c1570785550cd5cd3d024a0fd25 | 2022-05-21T16:36:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-71 | 4 | null | transformers | 19,902 | Entry not found |
connectivity/bert_ft_qqp-72 | 632ae0bbd486baf765faa29200993d5d0b3d725d | 2022-05-21T16:36:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-72 | 4 | null | transformers | 19,903 | Entry not found |
connectivity/bert_ft_qqp-73 | 5ea448df9522b83438f9ef1bd018b5ded296a2a1 | 2022-05-21T16:36:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-73 | 4 | null | transformers | 19,904 | Entry not found |
connectivity/cola_6ep_ft-1 | 31349a7432f85589807b46d90510e5a68f99afaf | 2022-05-21T16:43:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-1 | 4 | null | transformers | 19,905 | Entry not found |
connectivity/bert_ft_qqp-74 | 3bb894ac2f88ee1cb9f5a770fdf601d5717be4f8 | 2022-05-21T16:36:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-74 | 4 | null | transformers | 19,906 | Entry not found |
connectivity/cola_6ep_ft-2 | 98442596137833c6849aacdd82df4efe24da0363 | 2022-05-21T16:43:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-2 | 4 | null | transformers | 19,907 | Entry not found |
connectivity/cola_6ep_ft-3 | c4508da4ff5159fa260ee15cd0c8ed899bb4240c | 2022-05-21T16:43:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-3 | 4 | null | transformers | 19,908 | Entry not found |
connectivity/cola_6ep_ft-4 | 979373e908ec43c3b3d4c155e04fd74e3419cc68 | 2022-05-21T16:43:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-4 | 4 | null | transformers | 19,909 | Entry not found |
connectivity/bert_ft_qqp-75 | b1ba916a216627dd29edbadf93d6246dc29e77a1 | 2022-05-21T16:36:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-75 | 4 | null | transformers | 19,910 | Entry not found |
connectivity/cola_6ep_ft-5 | ece4be0f3c8d6a233841583654a8860f31aa70f9 | 2022-05-21T16:43:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-5 | 4 | null | transformers | 19,911 | Entry not found |
connectivity/cola_6ep_ft-6 | 0a8243f1a40daf664335e1261637fb09255c8f90 | 2022-05-21T16:43:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-6 | 4 | null | transformers | 19,912 | Entry not found |
connectivity/cola_6ep_ft-7 | 262cd8a60d76766626cf85ed927512461a7f89fa | 2022-05-21T16:43:36.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-7 | 4 | null | transformers | 19,913 | Entry not found |
connectivity/bert_ft_qqp-76 | b05217380a1032df2fcdc3225dfe1f50573676da | 2022-05-21T16:36:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-76 | 4 | null | transformers | 19,914 | Entry not found |
connectivity/cola_6ep_ft-8 | de3814defa93a41e9b4a8d7ed46d1bcd1d86b94c | 2022-05-21T16:43:36.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-8 | 4 | null | transformers | 19,915 | Entry not found |
connectivity/cola_6ep_ft-9 | 95be824bd118dd89b0d73778aea6c5b258b36bda | 2022-05-21T16:43:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-9 | 4 | null | transformers | 19,916 | Entry not found |
connectivity/cola_6ep_ft-10 | 03459c23bf46787327e666661c7448df8b2526e0 | 2022-05-21T16:43:38.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-10 | 4 | null | transformers | 19,917 | Entry not found |
connectivity/cola_6ep_ft-11 | 881a45d477cd283f71c624989c784cf898a93f34 | 2022-05-21T16:43:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-11 | 4 | null | transformers | 19,918 | Entry not found |
connectivity/cola_6ep_ft-12 | 4ffa69d9d4da3367f3935d998b1ae8f17ff69644 | 2022-05-21T16:43:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-12 | 4 | null | transformers | 19,919 | Entry not found |
connectivity/cola_6ep_ft-13 | 6db54919791c2a3c385df69f5ee76b6f99dcc3da | 2022-05-21T16:43:40.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-13 | 4 | null | transformers | 19,920 | Entry not found |
connectivity/cola_6ep_ft-14 | 7f21aeef7fcaec780e0a878fec03cc386948f071 | 2022-05-21T16:43:40.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-14 | 4 | null | transformers | 19,921 | Entry not found |
connectivity/cola_6ep_ft-15 | 3f6f3cbea5927dd03337c461d7a6b9e4a8ba152a | 2022-05-21T16:43:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-15 | 4 | null | transformers | 19,922 | Entry not found |
connectivity/bert_ft_qqp-77 | b77b5b44a957d94c5d4705eb2e08ac01fa8c63f0 | 2022-05-21T16:36:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-77 | 4 | null | transformers | 19,923 | Entry not found |
connectivity/cola_6ep_ft-16 | 4b977ac88df3254ccae2be13235a110516245461 | 2022-05-21T16:43:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-16 | 4 | null | transformers | 19,924 | Entry not found |
connectivity/cola_6ep_ft-17 | e4f45bec04a9feaa35007a8828b6b34edef2f3d1 | 2022-05-21T16:43:42.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-17 | 4 | null | transformers | 19,925 | Entry not found |
connectivity/cola_6ep_ft-18 | ab9d16fddb7e2734f2dcb3833c7f2e0cb544bf37 | 2022-05-21T16:43:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-18 | 4 | null | transformers | 19,926 | Entry not found |
connectivity/cola_6ep_ft-19 | fb35e587400d9a0db28c0b6dbcd76a0de718825b | 2022-05-21T16:43:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-19 | 4 | null | transformers | 19,927 | Entry not found |
connectivity/cola_6ep_ft-20 | 4bcd65862a371a67e34c62d77c2cab8acc4a6588 | 2022-05-21T16:43:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-20 | 4 | null | transformers | 19,928 | Entry not found |
connectivity/cola_6ep_ft-21 | 3ddb842dc4cd4706ffdf75447bcea8b493c3d661 | 2022-05-21T16:43:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-21 | 4 | null | transformers | 19,929 | Entry not found |
connectivity/cola_6ep_ft-22 | 292e76fed3f93a18e1b051edc5dafabc5a6c6af9 | 2022-05-21T16:43:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-22 | 4 | null | transformers | 19,930 | Entry not found |
connectivity/cola_6ep_ft-23 | bdea85c7220d3fa93520cec4400b7f75c142d210 | 2022-05-21T16:43:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-23 | 4 | null | transformers | 19,931 | Entry not found |
connectivity/bert_ft_qqp-78 | 4a454f4fa7f1705a87f19b116459c539db9c8cb3 | 2022-05-21T16:36:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-78 | 4 | null | transformers | 19,932 | Entry not found |
connectivity/cola_6ep_ft-24 | b42a75b3d57fad3f7594aa266ff95338359d8aed | 2022-05-21T16:43:46.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-24 | 4 | null | transformers | 19,933 | Entry not found |
connectivity/cola_6ep_ft-25 | 177a6cad0477b2aaa6adeac03c60e2e7e8710a2f | 2022-05-21T16:43:47.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-25 | 4 | null | transformers | 19,934 | Entry not found |
connectivity/cola_6ep_ft-26 | dd52fb97ae208a4ff1d5525e7fd00c4ffba20f93 | 2022-05-21T16:43:47.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-26 | 4 | null | transformers | 19,935 | Entry not found |
connectivity/cola_6ep_ft-27 | 7ab29204f644e011c37aba96f8e44d5aafe12323 | 2022-05-21T16:43:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-27 | 4 | null | transformers | 19,936 | Entry not found |
connectivity/cola_6ep_ft-28 | 23ad89853315e85578d1b4d9b1b86283daf8798b | 2022-05-21T16:43:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-28 | 4 | null | transformers | 19,937 | Entry not found |
connectivity/cola_6ep_ft-29 | 6e794d5400fec939e15d523cef4f602174a1ab97 | 2022-05-21T16:43:49.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-29 | 4 | null | transformers | 19,938 | Entry not found |
connectivity/cola_6ep_ft-30 | 45fdd87ec23d926d6cf35d1dbb0c3a1bdf637a33 | 2022-05-21T16:43:49.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-30 | 4 | null | transformers | 19,939 | Entry not found |
connectivity/cola_6ep_ft-31 | 84b542808b342b9da42d9a822eeda686afb6e1f0 | 2022-05-21T16:43:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-31 | 4 | null | transformers | 19,940 | Entry not found |
connectivity/bert_ft_qqp-79 | f2397ebe0e50b1269bcf851fe99de7dfab5c0162 | 2022-05-21T16:37:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-79 | 4 | null | transformers | 19,941 | Entry not found |
connectivity/cola_6ep_ft-32 | 66ffb4febac9afce847a74544e80de31ef9680a8 | 2022-05-21T16:43:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-32 | 4 | null | transformers | 19,942 | Entry not found |
connectivity/cola_6ep_ft-33 | 652ab62e39faa7573c66e0eed67c32a9e7d8fdff | 2022-05-21T16:43:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-33 | 4 | null | transformers | 19,943 | Entry not found |
connectivity/cola_6ep_ft-34 | 825ed41b6fe0b417f02877863699cc109f048ceb | 2022-05-21T16:43:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-34 | 4 | null | transformers | 19,944 | Entry not found |
connectivity/cola_6ep_ft-35 | c04d2f627a4cb708ed0c8f7dfd6bc4d25b8a7192 | 2022-05-21T16:43:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-35 | 4 | null | transformers | 19,945 | Entry not found |
connectivity/cola_6ep_ft-36 | 4ea15fa120e9dc4d4e36effd04326198969770ab | 2022-05-21T16:43:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-36 | 4 | null | transformers | 19,946 | Entry not found |
connectivity/cola_6ep_ft-37 | 7faafcc6c7c3ff0307542ce8585a30586c93e241 | 2022-05-21T16:43:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/cola_6ep_ft-37 | 4 | null | transformers | 19,947 | Entry not found |
connectivity/bert_ft_qqp-82 | 346c15284ff0cb769f3ce78fcfa21d6b27b24c57 | 2022-05-21T16:37:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-82 | 4 | null | transformers | 19,948 | Entry not found |
connectivity/bert_ft_qqp-84 | 1aaa1167e3f5404418c42a543f6fefb65e5d4019 | 2022-05-21T16:37:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-84 | 4 | null | transformers | 19,949 | Entry not found |
connectivity/bert_ft_qqp-87 | 42ae00e9c194b9264cdc5acf39e8248cb070527a | 2022-05-21T16:37:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-87 | 4 | null | transformers | 19,950 | Entry not found |
connectivity/bert_ft_qqp-88 | 387c2268033c48d2584e923ff641e9fd23044c24 | 2022-05-21T16:37:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-88 | 4 | null | transformers | 19,951 | Entry not found |
connectivity/bert_ft_qqp-89 | a6fe58bf040cb0b933af2b9224e6c2644d518f48 | 2022-05-21T16:37:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-89 | 4 | null | transformers | 19,952 | Entry not found |
connectivity/bert_ft_qqp-91 | 1a0ad1c6b61368a290aedb8e4940f024dbb15cb2 | 2022-05-21T16:37:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-91 | 4 | null | transformers | 19,953 | Entry not found |
connectivity/bert_ft_qqp-92 | feed54822c9efc5c08b8561ce68467bed9c35fb4 | 2022-05-21T16:38:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-92 | 4 | null | transformers | 19,954 | Entry not found |
connectivity/bert_ft_qqp-93 | 025a9dff8ac16ee714601f5d85d769805cec6e49 | 2022-05-21T16:38:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-93 | 4 | null | transformers | 19,955 | Entry not found |
connectivity/bert_ft_qqp-94 | f99c09f684912695b59b882cabbef21afe25fb4b | 2022-05-21T16:38:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-94 | 4 | null | transformers | 19,956 | Entry not found |
connectivity/bert_ft_qqp-95 | a57489a428a6db08c3e8109753eca8d95a8bb1c7 | 2022-05-21T16:38:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-95 | 4 | null | transformers | 19,957 | Entry not found |
connectivity/bert_ft_qqp-99 | 63e80d8b70cb6d574cc7ca84c1b774e1728ee090 | 2022-05-21T16:38:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | connectivity | null | connectivity/bert_ft_qqp-99 | 4 | null | transformers | 19,958 | Entry not found |
Dizzykong/gpt2-large-final | 05d574a9153b4544ba85081041f538b41c87d65a | 2022-05-23T06:39:14.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | Dizzykong | null | Dizzykong/gpt2-large-final | 4 | null | transformers | 19,959 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-large-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-final
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on an unknown dataset.
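Since the card does not yet document usage, here is a minimal, hedged inference sketch (not from the original author); the prompt and generation settings are illustrative placeholders:
```python
# Minimal sketch: generate text with the fine-tuned checkpoint via the Transformers pipeline.
# The prompt and generation arguments below are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="Dizzykong/gpt2-large-final")
output = generator("Once upon a time", max_new_tokens=50, do_sample=True)
print(output[0]["generated_text"])
```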
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.5
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
brever/wav2vec2-base-demo-colab | b2be6d82e74320c3120cc72577a9943ee61218aa | 2022-05-22T04:56:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | brever | null | brever/wav2vec2-base-demo-colab | 4 | null | transformers | 19,960 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3944
- Wer: 0.3142
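As a minimal, hedged usage sketch (not from the original card): the checkpoint is assumed to ship its processor, the audio path is a placeholder, and decoding a local file requires ffmpeg.
```python
# Minimal sketch: transcribe a local audio file with the ASR pipeline.
# "speech.wav" is a placeholder path; the checkpoint is assumed to include its processor.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="brever/wav2vec2-base-demo-colab")
print(asr("speech.wav")["text"])
```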
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4086 | 3.45 | 500 | 1.1494 | 0.8509 |
| 0.5968 | 6.9 | 1000 | 0.4306 | 0.4169 |
| 0.2363 | 10.34 | 1500 | 0.3820 | 0.3669 |
| 0.1365 | 13.79 | 2000 | 0.3863 | 0.3487 |
| 0.0916 | 17.24 | 2500 | 0.3851 | 0.3391 |
| 0.0704 | 20.69 | 3000 | 0.3759 | 0.3271 |
| 0.0537 | 24.14 | 3500 | 0.3747 | 0.3222 |
| 0.0413 | 27.59 | 4000 | 0.3944 | 0.3142 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.14.0
- Tokenizers 0.10.3
|
eslamxm/mt5-base-finetuned-arfa | d4c453a94d5b04352cffffc300d5796ffb0c3091 | 2022-05-23T01:44:07.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"arabic",
"ar",
"fa",
"persian",
"Abstractive Summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/mt5-base-finetuned-arfa | 4 | null | transformers | 19,961 | ---
license: apache-2.0
tags:
- summarization
- arabic
- ar
- fa
- persian
- mt5
- Abstractive Summarization
- generated_from_trainer
model-index:
- name: mt5-base-finetuned-arfa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-arfa
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1784
- Rouge-1: 25.68
- Rouge-2: 11.8
- Rouge-l: 22.99
- Gen Len: 18.99
- Bertscore: 71.78
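A minimal, hedged inference sketch (not from the original card); the article text is a placeholder and the generation limits are illustrative:
```python
# Minimal sketch: summarize an Arabic or Persian news article with the summarization pipeline.
# The article string is a placeholder; max/min lengths are illustrative only.
from transformers import pipeline

summarizer = pipeline("summarization", model="eslamxm/mt5-base-finetuned-arfa")
article = "..."  # replace with an Arabic or Persian news article
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```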
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 3.9866 | 1.0 | 2649 | 3.3635 | 21.94 | 8.59 | 19.5 | 18.99 | 70.6 |
| 3.5637 | 2.0 | 5298 | 3.2557 | 24.01 | 10.0 | 21.26 | 18.99 | 71.22 |
| 3.4016 | 3.0 | 7947 | 3.2005 | 24.4 | 10.43 | 21.72 | 18.98 | 71.36 |
| 3.2985 | 4.0 | 10596 | 3.1784 | 24.68 | 10.73 | 22.01 | 18.98 | 71.51 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
globuslabs/ScholarBERT_10_WB | 0a74916473c7fc91cdf6a2ec3df5126ff24ff734 | 2022-05-24T03:16:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"arxiv:2205.11342",
"transformers",
"science",
"multi-displinary",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | globuslabs | null | globuslabs/ScholarBERT_10_WB | 4 | null | transformers | 19,962 | ---
language: en
tags:
- science
- multi-displinary
license: apache-2.0
---
# ScholarBERT_10_WB Model
This is the **ScholarBERT_10_WB** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**22.1B tokens**).
The pretraining data additionally includes Wikipedia+BookCorpus, which are used to pretrain the [BERT-base](https://huggingface.co/bert-base-cased) and [BERT-large](https://huggingface.co/bert-large-cased) models.
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters.
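A minimal, hedged fill-mask sketch (not from the original card); the example sentence is a placeholder:
```python
# Minimal sketch: query the masked-language-modeling head.
# The sentence is a placeholder; [MASK] is the standard BERT mask token.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="globuslabs/ScholarBERT_10_WB")
for prediction in unmasker("The mitochondria is the [MASK] of the cell."):
    print(prediction["token_str"], round(prediction["score"], 4))
```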
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 24 |
| Hidden Size | 1024 |
| Attention Heads | 16 |
| Total Parameters | 340M |
# Training Dataset
The vocabulary and the model are pretrained on **10% of the PRD** scientific literature dataset and Wikipedia+BookCorpus.
The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”),
a nonprofit organization based in California. This dataset was constructed from a corpus
of journal article files, from which we successfully extracted text from 75,496,055 articles across 178,928 journals.
The articles span across Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2022scholarbert,
doi = {10.48550/ARXIV.2205.11342},
url = {https://arxiv.org/abs/2205.11342},
author = {Hong, Zhi and Ajith, Aswathy and Pauloski, Gregory and Duede, Eamon and Malamud, Carl and Magoulas, Roger and Chard, Kyle and Foster, Ian},
title = {ScholarBERT: Bigger is Not Always Better},
publisher = {arXiv},
year = {2022}
}
``` |
krotima1/mbart-at2h-cs | 9a8788e1bf095afcd6a2f4c44c48a079c0387c3c | 2022-05-23T20:34:40.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"cs",
"dataset:private Czech News Center dataset news-based",
"dataset:SumeCzech dataset news-based",
"transformers",
"abstractive summarization",
"mbart-cc25",
"Czech",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | krotima1 | null | krotima1/mbart-at2h-cs | 4 | null | transformers | 19,963 | ---
language:
- cs
tags:
- abstractive summarization
- mbart-cc25
- Czech
license: apache-2.0
datasets:
- private Czech News Center dataset news-based
- SumeCzech dataset news-based
metrics:
- rouge
- rougeraw
---
# mBART fine-tuned model for Czech abstractive summarization (AT2H-CS)
This model is a fine-tuned checkpoint of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the Czech news dataset to produce Czech abstractive summaries.
## Task
The model addresses the task ``Abstract + Text to Headline`` (AT2H), which consists of generating a one- or two-sentence headline-style summary from a Czech news text.
## Dataset
The model has been trained on a large Czech news dataset created by concatenating two datasets: the private CNC dataset provided by Czech News Center and the [SumeCzech](https://ufal.mff.cuni.cz/sumeczech) dataset. The combined dataset includes around 1.75M Czech news documents, each consisting of Headline, Abstract, and Full-text sections. Truncation and padding were set to 512 tokens for the encoder and 64 for the decoder.
## Training
The model has been trained on 1x NVIDIA Tesla A100 40GB for 40 hours, 1x NVIDIA Tesla V100 32GB for 20 hours, and 4x NVIDIA Tesla A100 40GB for 20 hours. During training, the model has seen 7936K documents corresponding to roughly 5 epochs.
# Use
The following example assumes you are using the provided Summarizer.ipynb file.
```python
def summ_config():
cfg = OrderedDict([
# summarization model - checkpoint from website
("model_name", "krotima1/mbart-at2h-cs"),
("inference_cfg", OrderedDict([
("num_beams", 4),
("top_k", 40),
("top_p", 0.92),
("do_sample", True),
("temperature", 0.89),
("repetition_penalty", 1.2),
("no_repeat_ngram_size", None),
("early_stopping", True),
("max_length", 64),
("min_length", 10),
])),
#texts to summarize
("text",
[
"Input your Czech text",
]
),
])
return cfg
cfg = summ_config()
#load model
model = AutoModelForSeq2SeqLM.from_pretrained(cfg["model_name"])
tokenizer = AutoTokenizer.from_pretrained(cfg["model_name"])
# init summarizer
summarize = Summarizer(model, tokenizer, cfg["inference_cfg"])
summarize(cfg["text"])
``` |
juancavallotti/bert_sentence_classifier | 6cf4f7eafeb370d2c88872065edf55c690e279c7 | 2022-05-23T08:40:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | juancavallotti | null | juancavallotti/bert_sentence_classifier | 4 | null | transformers | 19,964 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: bert_sentence_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_sentence_classifier
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0040
- F1: 0.6123
- Precision: 0.6123
- Recall: 0.6123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:------:|:---------------:|:------:|:---------:|:------:|
| 2.0049 | 0.04 | 500 | 1.5854 | 0.5693 | 0.5693 | 0.5693 |
| 1.552 | 0.07 | 1000 | 1.4428 | 0.6131 | 0.6131 | 0.6131 |
| 1.502 | 0.11 | 1500 | 1.3977 | 0.6213 | 0.6213 | 0.6213 |
| 1.4515 | 0.14 | 2000 | 1.3926 | 0.6200 | 0.6200 | 0.6200 |
| 1.43 | 0.18 | 2500 | 1.3553 | 0.6350 | 0.6350 | 0.6350 |
| 1.413 | 0.21 | 3000 | 1.3461 | 0.6346 | 0.6346 | 0.6346 |
| 1.4109 | 0.25 | 3500 | 1.3199 | 0.6496 | 0.6496 | 0.6496 |
| 1.3853 | 0.28 | 4000 | 1.3338 | 0.6406 | 0.6406 | 0.6406 |
| 1.3788 | 0.32 | 4500 | 1.3306 | 0.6471 | 0.6471 | 0.6471 |
| 1.3585 | 0.35 | 5000 | 1.3295 | 0.6410 | 0.6410 | 0.6410 |
| 1.356 | 0.39 | 5500 | 1.3025 | 0.6441 | 0.6441 | 0.6441 |
| 1.3534 | 0.42 | 6000 | 1.3197 | 0.6406 | 0.6406 | 0.6406 |
| 1.3324 | 0.46 | 6500 | 1.2932 | 0.6436 | 0.6436 | 0.6436 |
| 1.3563 | 0.49 | 7000 | 1.3202 | 0.6488 | 0.6488 | 0.6488 |
| 1.3121 | 0.53 | 7500 | 1.3024 | 0.6428 | 0.6428 | 0.6428 |
| 1.3092 | 0.56 | 8000 | 1.3142 | 0.6419 | 0.6419 | 0.6419 |
| 1.3769 | 0.6 | 8500 | 1.2974 | 0.6441 | 0.6441 | 0.6441 |
| 1.3487 | 0.63 | 9000 | 1.2882 | 0.6556 | 0.6556 | 0.6556 |
| 1.3475 | 0.67 | 9500 | 1.2928 | 0.6441 | 0.6441 | 0.6441 |
| 1.3038 | 0.7 | 10000 | 1.2846 | 0.6488 | 0.6488 | 0.6488 |
| 1.3371 | 0.74 | 10500 | 1.2894 | 0.6591 | 0.6591 | 0.6591 |
| 1.3222 | 0.77 | 11000 | 1.2745 | 0.6535 | 0.6535 | 0.6535 |
| 1.2983 | 0.81 | 11500 | 1.2832 | 0.6526 | 0.6526 | 0.6526 |
| 1.3505 | 0.84 | 12000 | 1.2812 | 0.6531 | 0.6531 | 0.6531 |
| 1.2752 | 0.88 | 12500 | 1.2629 | 0.6578 | 0.6578 | 0.6578 |
| 1.3115 | 0.91 | 13000 | 1.2787 | 0.6453 | 0.6453 | 0.6453 |
| 1.3353 | 0.95 | 13500 | 1.2707 | 0.6539 | 0.6539 | 0.6539 |
| 1.2982 | 0.98 | 14000 | 1.2618 | 0.6569 | 0.6569 | 0.6569 |
| 1.1885 | 1.02 | 14500 | 1.2999 | 0.6544 | 0.6544 | 0.6544 |
| 1.1339 | 1.05 | 15000 | 1.3086 | 0.6458 | 0.6458 | 0.6458 |
| 1.0661 | 1.09 | 15500 | 1.2871 | 0.6582 | 0.6582 | 0.6582 |
| 1.109 | 1.12 | 16000 | 1.2800 | 0.6608 | 0.6608 | 0.6608 |
| 1.0305 | 1.16 | 16500 | 1.3098 | 0.6604 | 0.6604 | 0.6604 |
| 1.0855 | 1.19 | 17000 | 1.2968 | 0.6587 | 0.6587 | 0.6587 |
| 1.0933 | 1.23 | 17500 | 1.3075 | 0.6509 | 0.6509 | 0.6509 |
| 1.1229 | 1.26 | 18000 | 1.3018 | 0.6496 | 0.6496 | 0.6496 |
| 1.1043 | 1.3 | 18500 | 1.2832 | 0.6565 | 0.6565 | 0.6565 |
| 1.1344 | 1.33 | 19000 | 1.2825 | 0.6591 | 0.6591 | 0.6591 |
| 1.1467 | 1.37 | 19500 | 1.2797 | 0.6642 | 0.6642 | 0.6642 |
| 1.0596 | 1.4 | 20000 | 1.2841 | 0.6522 | 0.6522 | 0.6522 |
| 1.1286 | 1.44 | 20500 | 1.2912 | 0.6544 | 0.6544 | 0.6544 |
| 1.1219 | 1.47 | 21000 | 1.3143 | 0.6509 | 0.6509 | 0.6509 |
| 1.1339 | 1.51 | 21500 | 1.3021 | 0.6539 | 0.6539 | 0.6539 |
| 1.1091 | 1.54 | 22000 | 1.2738 | 0.6625 | 0.6625 | 0.6625 |
| 1.1403 | 1.58 | 22500 | 1.2822 | 0.6548 | 0.6548 | 0.6548 |
| 1.146 | 1.61 | 23000 | 1.2724 | 0.6587 | 0.6587 | 0.6587 |
| 1.1237 | 1.65 | 23500 | 1.2757 | 0.6569 | 0.6569 | 0.6569 |
| 1.1453 | 1.68 | 24000 | 1.2985 | 0.6535 | 0.6535 | 0.6535 |
| 1.1309 | 1.72 | 24500 | 1.2876 | 0.6578 | 0.6578 | 0.6578 |
| 1.1494 | 1.75 | 25000 | 1.2892 | 0.6552 | 0.6552 | 0.6552 |
| 1.1571 | 1.79 | 25500 | 1.2806 | 0.6548 | 0.6548 | 0.6548 |
| 1.0766 | 1.82 | 26000 | 1.2889 | 0.6509 | 0.6509 | 0.6509 |
| 1.1416 | 1.86 | 26500 | 1.2673 | 0.6599 | 0.6599 | 0.6599 |
| 1.1179 | 1.89 | 27000 | 1.2919 | 0.6501 | 0.6501 | 0.6501 |
| 1.0838 | 1.93 | 27500 | 1.3198 | 0.6488 | 0.6488 | 0.6488 |
| 1.1426 | 1.96 | 28000 | 1.2766 | 0.6561 | 0.6561 | 0.6561 |
| 1.1559 | 2.0 | 28500 | 1.2839 | 0.6561 | 0.6561 | 0.6561 |
| 0.8783 | 2.03 | 29000 | 1.3377 | 0.6509 | 0.6509 | 0.6509 |
| 0.8822 | 2.07 | 29500 | 1.3813 | 0.6501 | 0.6501 | 0.6501 |
| 0.8823 | 2.1 | 30000 | 1.3738 | 0.6514 | 0.6514 | 0.6514 |
| 0.9094 | 2.14 | 30500 | 1.3667 | 0.6522 | 0.6522 | 0.6522 |
| 0.8828 | 2.17 | 31000 | 1.3654 | 0.6582 | 0.6582 | 0.6582 |
| 0.8489 | 2.21 | 31500 | 1.3404 | 0.6556 | 0.6556 | 0.6556 |
| 0.8719 | 2.24 | 32000 | 1.4173 | 0.6393 | 0.6393 | 0.6393 |
| 0.8926 | 2.28 | 32500 | 1.4026 | 0.6535 | 0.6535 | 0.6535 |
| 0.871 | 2.31 | 33000 | 1.4133 | 0.6428 | 0.6428 | 0.6428 |
| 0.9047 | 2.35 | 33500 | 1.3915 | 0.6449 | 0.6449 | 0.6449 |
| 0.8621 | 2.38 | 34000 | 1.4109 | 0.6483 | 0.6483 | 0.6483 |
| 0.8978 | 2.42 | 34500 | 1.3675 | 0.6471 | 0.6471 | 0.6471 |
| 0.8808 | 2.45 | 35000 | 1.3826 | 0.6522 | 0.6522 | 0.6522 |
| 0.9299 | 2.49 | 35500 | 1.3673 | 0.6535 | 0.6535 | 0.6535 |
| 0.8546 | 2.52 | 36000 | 1.4034 | 0.6518 | 0.6518 | 0.6518 |
| 0.8855 | 2.56 | 36500 | 1.3763 | 0.6458 | 0.6458 | 0.6458 |
| 0.8996 | 2.59 | 37000 | 1.3930 | 0.6539 | 0.6539 | 0.6539 |
| 0.8889 | 2.63 | 37500 | 1.3966 | 0.6471 | 0.6471 | 0.6471 |
| 0.8811 | 2.66 | 38000 | 1.4131 | 0.6475 | 0.6475 | 0.6475 |
| 0.9129 | 2.7 | 38500 | 1.3816 | 0.6445 | 0.6445 | 0.6445 |
| 0.8708 | 2.73 | 39000 | 1.4354 | 0.6492 | 0.6492 | 0.6492 |
| 0.8667 | 2.77 | 39500 | 1.4076 | 0.6380 | 0.6380 | 0.6380 |
| 0.9139 | 2.8 | 40000 | 1.4200 | 0.6423 | 0.6423 | 0.6423 |
| 0.9035 | 2.84 | 40500 | 1.3913 | 0.6462 | 0.6462 | 0.6462 |
| 0.9312 | 2.87 | 41000 | 1.3806 | 0.6449 | 0.6449 | 0.6449 |
| 0.9382 | 2.91 | 41500 | 1.4064 | 0.6522 | 0.6522 | 0.6522 |
| 0.8765 | 2.95 | 42000 | 1.4146 | 0.6380 | 0.6380 | 0.6380 |
| 0.8801 | 2.98 | 42500 | 1.3898 | 0.6445 | 0.6445 | 0.6445 |
| 0.7988 | 3.02 | 43000 | 1.4740 | 0.6436 | 0.6436 | 0.6436 |
| 0.6752 | 3.05 | 43500 | 1.5622 | 0.6372 | 0.6372 | 0.6372 |
| 0.649 | 3.09 | 44000 | 1.6055 | 0.6359 | 0.6359 | 0.6359 |
| 0.669 | 3.12 | 44500 | 1.5736 | 0.6380 | 0.6380 | 0.6380 |
| 0.7189 | 3.16 | 45000 | 1.5832 | 0.6346 | 0.6346 | 0.6346 |
| 0.6724 | 3.19 | 45500 | 1.6194 | 0.6260 | 0.6260 | 0.6260 |
| 0.7139 | 3.23 | 46000 | 1.5966 | 0.6359 | 0.6359 | 0.6359 |
| 0.6985 | 3.26 | 46500 | 1.5803 | 0.6342 | 0.6342 | 0.6342 |
| 0.6503 | 3.3 | 47000 | 1.6485 | 0.6376 | 0.6376 | 0.6376 |
| 0.6879 | 3.33 | 47500 | 1.5959 | 0.6325 | 0.6325 | 0.6325 |
| 0.7342 | 3.37 | 48000 | 1.5534 | 0.6389 | 0.6389 | 0.6389 |
| 0.6838 | 3.4 | 48500 | 1.5807 | 0.6337 | 0.6337 | 0.6337 |
| 0.7295 | 3.44 | 49000 | 1.6192 | 0.6372 | 0.6372 | 0.6372 |
| 0.7044 | 3.47 | 49500 | 1.6618 | 0.6346 | 0.6346 | 0.6346 |
| 0.7071 | 3.51 | 50000 | 1.6255 | 0.6342 | 0.6342 | 0.6342 |
| 0.7055 | 3.54 | 50500 | 1.5584 | 0.6363 | 0.6363 | 0.6363 |
| 0.6781 | 3.58 | 51000 | 1.5948 | 0.6376 | 0.6376 | 0.6376 |
| 0.7004 | 3.61 | 51500 | 1.6311 | 0.6320 | 0.6320 | 0.6320 |
| 0.715 | 3.65 | 52000 | 1.5972 | 0.6423 | 0.6423 | 0.6423 |
| 0.7399 | 3.68 | 52500 | 1.6402 | 0.6325 | 0.6325 | 0.6325 |
| 0.6972 | 3.72 | 53000 | 1.6186 | 0.6406 | 0.6406 | 0.6406 |
| 0.7219 | 3.75 | 53500 | 1.5945 | 0.6359 | 0.6359 | 0.6359 |
| 0.763 | 3.79 | 54000 | 1.5900 | 0.6380 | 0.6380 | 0.6380 |
| 0.7196 | 3.82 | 54500 | 1.6218 | 0.6320 | 0.6320 | 0.6320 |
| 0.7682 | 3.86 | 55000 | 1.5538 | 0.6372 | 0.6372 | 0.6372 |
| 0.6949 | 3.89 | 55500 | 1.6209 | 0.6295 | 0.6295 | 0.6295 |
| 0.7461 | 3.93 | 56000 | 1.6237 | 0.6316 | 0.6316 | 0.6316 |
| 0.7295 | 3.96 | 56500 | 1.6011 | 0.6333 | 0.6333 | 0.6333 |
| 0.6846 | 4.0 | 57000 | 1.6899 | 0.6312 | 0.6312 | 0.6312 |
| 0.556 | 4.03 | 57500 | 1.7783 | 0.6303 | 0.6303 | 0.6303 |
| 0.5276 | 4.07 | 58000 | 1.8985 | 0.6260 | 0.6260 | 0.6260 |
| 0.5576 | 4.1 | 58500 | 1.8263 | 0.6264 | 0.6264 | 0.6264 |
| 0.5303 | 4.14 | 59000 | 1.8411 | 0.6316 | 0.6316 | 0.6316 |
| 0.5574 | 4.17 | 59500 | 1.8353 | 0.6286 | 0.6286 | 0.6286 |
| 0.5468 | 4.21 | 60000 | 1.9252 | 0.6286 | 0.6286 | 0.6286 |
| 0.532 | 4.24 | 60500 | 1.8903 | 0.6295 | 0.6295 | 0.6295 |
| 0.5329 | 4.28 | 61000 | 1.9416 | 0.6252 | 0.6252 | 0.6252 |
| 0.5539 | 4.31 | 61500 | 1.9149 | 0.6260 | 0.6260 | 0.6260 |
| 0.5661 | 4.35 | 62000 | 1.9074 | 0.6286 | 0.6286 | 0.6286 |
| 0.5502 | 4.38 | 62500 | 2.0259 | 0.6316 | 0.6316 | 0.6316 |
| 0.5658 | 4.42 | 63000 | 1.9049 | 0.6256 | 0.6256 | 0.6256 |
| 0.5958 | 4.45 | 63500 | 1.9252 | 0.6166 | 0.6166 | 0.6166 |
| 0.5972 | 4.49 | 64000 | 1.8518 | 0.6286 | 0.6286 | 0.6286 |
| 0.5964 | 4.52 | 64500 | 1.8793 | 0.6234 | 0.6234 | 0.6234 |
| 0.5506 | 4.56 | 65000 | 1.9218 | 0.6346 | 0.6346 | 0.6346 |
| 0.5516 | 4.59 | 65500 | 1.8957 | 0.6389 | 0.6389 | 0.6389 |
| 0.5777 | 4.63 | 66000 | 1.9603 | 0.6295 | 0.6295 | 0.6295 |
| 0.5953 | 4.66 | 66500 | 1.8605 | 0.6252 | 0.6252 | 0.6252 |
| 0.5797 | 4.7 | 67000 | 1.8797 | 0.6320 | 0.6320 | 0.6320 |
| 0.5836 | 4.73 | 67500 | 1.9320 | 0.6260 | 0.6260 | 0.6260 |
| 0.6019 | 4.77 | 68000 | 1.8465 | 0.6239 | 0.6239 | 0.6239 |
| 0.6099 | 4.8 | 68500 | 1.9481 | 0.6299 | 0.6299 | 0.6299 |
| 0.6064 | 4.84 | 69000 | 1.9033 | 0.6307 | 0.6307 | 0.6307 |
| 0.5836 | 4.87 | 69500 | 1.8878 | 0.6234 | 0.6234 | 0.6234 |
| 0.5766 | 4.91 | 70000 | 1.8860 | 0.6277 | 0.6277 | 0.6277 |
| 0.623 | 4.94 | 70500 | 1.8033 | 0.6303 | 0.6303 | 0.6303 |
| 0.596 | 4.98 | 71000 | 1.9038 | 0.6333 | 0.6333 | 0.6333 |
| 0.537 | 5.01 | 71500 | 2.0795 | 0.6234 | 0.6234 | 0.6234 |
| 0.4663 | 5.05 | 72000 | 2.0325 | 0.6217 | 0.6217 | 0.6217 |
| 0.4173 | 5.08 | 72500 | 2.2377 | 0.6273 | 0.6273 | 0.6273 |
| 0.4521 | 5.12 | 73000 | 2.1218 | 0.6217 | 0.6217 | 0.6217 |
| 0.4243 | 5.15 | 73500 | 2.2731 | 0.6204 | 0.6204 | 0.6204 |
| 0.4672 | 5.19 | 74000 | 2.2111 | 0.6247 | 0.6247 | 0.6247 |
| 0.4884 | 5.22 | 74500 | 2.1027 | 0.6226 | 0.6226 | 0.6226 |
| 0.4314 | 5.26 | 75000 | 2.2218 | 0.6230 | 0.6230 | 0.6230 |
| 0.4581 | 5.29 | 75500 | 2.2036 | 0.6264 | 0.6264 | 0.6264 |
| 0.4245 | 5.33 | 76000 | 2.2419 | 0.6200 | 0.6200 | 0.6200 |
| 0.4391 | 5.36 | 76500 | 2.1762 | 0.6187 | 0.6187 | 0.6187 |
| 0.4672 | 5.4 | 77000 | 2.2779 | 0.6179 | 0.6179 | 0.6179 |
| 0.4821 | 5.43 | 77500 | 2.2881 | 0.6187 | 0.6187 | 0.6187 |
| 0.4872 | 5.47 | 78000 | 2.2406 | 0.6119 | 0.6119 | 0.6119 |
| 0.4584 | 5.5 | 78500 | 2.3521 | 0.6209 | 0.6209 | 0.6209 |
| 0.4774 | 5.54 | 79000 | 2.2522 | 0.6174 | 0.6174 | 0.6174 |
| 0.5151 | 5.57 | 79500 | 2.2233 | 0.6140 | 0.6140 | 0.6140 |
| 0.493 | 5.61 | 80000 | 2.2333 | 0.6256 | 0.6256 | 0.6256 |
| 0.4846 | 5.64 | 80500 | 2.1891 | 0.6200 | 0.6200 | 0.6200 |
| 0.478 | 5.68 | 81000 | 2.3159 | 0.6196 | 0.6196 | 0.6196 |
| 0.4851 | 5.71 | 81500 | 2.2356 | 0.6234 | 0.6234 | 0.6234 |
| 0.4902 | 5.75 | 82000 | 2.3525 | 0.6222 | 0.6222 | 0.6222 |
| 0.4992 | 5.79 | 82500 | 2.2111 | 0.6067 | 0.6067 | 0.6067 |
| 0.4799 | 5.82 | 83000 | 2.2650 | 0.6131 | 0.6131 | 0.6131 |
| 0.4849 | 5.86 | 83500 | 2.2628 | 0.6204 | 0.6204 | 0.6204 |
| 0.4772 | 5.89 | 84000 | 2.2711 | 0.6174 | 0.6174 | 0.6174 |
| 0.5465 | 5.93 | 84500 | 2.2793 | 0.6144 | 0.6144 | 0.6144 |
| 0.4466 | 5.96 | 85000 | 2.2369 | 0.6166 | 0.6166 | 0.6166 |
| 0.4885 | 6.0 | 85500 | 2.1963 | 0.6217 | 0.6217 | 0.6217 |
| 0.3862 | 6.03 | 86000 | 2.4233 | 0.6174 | 0.6174 | 0.6174 |
| 0.3738 | 6.07 | 86500 | 2.4405 | 0.6191 | 0.6191 | 0.6191 |
| 0.349 | 6.1 | 87000 | 2.4512 | 0.6161 | 0.6161 | 0.6161 |
| 0.3659 | 6.14 | 87500 | 2.5251 | 0.6226 | 0.6226 | 0.6226 |
| 0.3365 | 6.17 | 88000 | 2.5326 | 0.6217 | 0.6217 | 0.6217 |
| 0.3336 | 6.21 | 88500 | 2.4413 | 0.6179 | 0.6179 | 0.6179 |
| 0.3632 | 6.24 | 89000 | 2.6415 | 0.6114 | 0.6114 | 0.6114 |
| 0.3584 | 6.28 | 89500 | 2.5388 | 0.6179 | 0.6179 | 0.6179 |
| 0.3891 | 6.31 | 90000 | 2.6418 | 0.6123 | 0.6123 | 0.6123 |
| 0.3805 | 6.35 | 90500 | 2.6223 | 0.6127 | 0.6127 | 0.6127 |
| 0.363 | 6.38 | 91000 | 2.5399 | 0.6131 | 0.6131 | 0.6131 |
| 0.3723 | 6.42 | 91500 | 2.6033 | 0.6187 | 0.6187 | 0.6187 |
| 0.3808 | 6.45 | 92000 | 2.5281 | 0.6243 | 0.6243 | 0.6243 |
| 0.3921 | 6.49 | 92500 | 2.5814 | 0.6007 | 0.6007 | 0.6007 |
| 0.3763 | 6.52 | 93000 | 2.6656 | 0.6058 | 0.6058 | 0.6058 |
| 0.3921 | 6.56 | 93500 | 2.4935 | 0.6084 | 0.6084 | 0.6084 |
| 0.3737 | 6.59 | 94000 | 2.7270 | 0.6166 | 0.6166 | 0.6166 |
| 0.3766 | 6.63 | 94500 | 2.5289 | 0.6217 | 0.6217 | 0.6217 |
| 0.4439 | 6.66 | 95000 | 2.6161 | 0.6222 | 0.6222 | 0.6222 |
| 0.4166 | 6.7 | 95500 | 2.5298 | 0.6123 | 0.6123 | 0.6123 |
| 0.4064 | 6.73 | 96000 | 2.5952 | 0.6183 | 0.6183 | 0.6183 |
| 0.4253 | 6.77 | 96500 | 2.4567 | 0.6127 | 0.6127 | 0.6127 |
| 0.3754 | 6.8 | 97000 | 2.5473 | 0.6131 | 0.6131 | 0.6131 |
| 0.3993 | 6.84 | 97500 | 2.5563 | 0.6161 | 0.6161 | 0.6161 |
| 0.3802 | 6.87 | 98000 | 2.6585 | 0.6076 | 0.6076 | 0.6076 |
| 0.4504 | 6.91 | 98500 | 2.5700 | 0.6127 | 0.6127 | 0.6127 |
| 0.3832 | 6.94 | 99000 | 2.5983 | 0.6174 | 0.6174 | 0.6174 |
| 0.4212 | 6.98 | 99500 | 2.6137 | 0.6110 | 0.6110 | 0.6110 |
| 0.3253 | 7.01 | 100000 | 2.8467 | 0.6024 | 0.6024 | 0.6024 |
| 0.2553 | 7.05 | 100500 | 2.7412 | 0.6063 | 0.6063 | 0.6063 |
| 0.2771 | 7.08 | 101000 | 2.8670 | 0.6101 | 0.6101 | 0.6101 |
| 0.2733 | 7.12 | 101500 | 2.8536 | 0.6166 | 0.6166 | 0.6166 |
| 0.2972 | 7.15 | 102000 | 2.8254 | 0.6161 | 0.6161 | 0.6161 |
| 0.2893 | 7.19 | 102500 | 3.0228 | 0.6058 | 0.6058 | 0.6058 |
| 0.3104 | 7.22 | 103000 | 2.8617 | 0.6011 | 0.6011 | 0.6011 |
| 0.3019 | 7.26 | 103500 | 3.0106 | 0.6131 | 0.6131 | 0.6131 |
| 0.3143 | 7.29 | 104000 | 3.0189 | 0.6088 | 0.6088 | 0.6088 |
| 0.3054 | 7.33 | 104500 | 3.0291 | 0.6063 | 0.6063 | 0.6063 |
| 0.3145 | 7.36 | 105000 | 3.0166 | 0.6106 | 0.6106 | 0.6106 |
| 0.2913 | 7.4 | 105500 | 3.0480 | 0.6174 | 0.6174 | 0.6174 |
| 0.3159 | 7.43 | 106000 | 2.9714 | 0.6084 | 0.6084 | 0.6084 |
| 0.3216 | 7.47 | 106500 | 2.9359 | 0.6187 | 0.6187 | 0.6187 |
| 0.2982 | 7.5 | 107000 | 3.0509 | 0.6084 | 0.6084 | 0.6084 |
| 0.2952 | 7.54 | 107500 | 2.9428 | 0.6076 | 0.6076 | 0.6076 |
| 0.304 | 7.57 | 108000 | 3.0155 | 0.6071 | 0.6071 | 0.6071 |
| 0.2896 | 7.61 | 108500 | 3.0276 | 0.6196 | 0.6196 | 0.6196 |
| 0.3226 | 7.64 | 109000 | 2.9331 | 0.6097 | 0.6097 | 0.6097 |
| 0.299 | 7.68 | 109500 | 2.9671 | 0.6050 | 0.6050 | 0.6050 |
| 0.3079 | 7.71 | 110000 | 2.9394 | 0.6093 | 0.6093 | 0.6093 |
| 0.3064 | 7.75 | 110500 | 2.8690 | 0.6110 | 0.6110 | 0.6110 |
| 0.3423 | 7.78 | 111000 | 2.9095 | 0.6183 | 0.6183 | 0.6183 |
| 0.3085 | 7.82 | 111500 | 2.9967 | 0.6260 | 0.6260 | 0.6260 |
| 0.3071 | 7.85 | 112000 | 2.9429 | 0.6127 | 0.6127 | 0.6127 |
| 0.3197 | 7.89 | 112500 | 3.0123 | 0.6157 | 0.6157 | 0.6157 |
| 0.3361 | 7.92 | 113000 | 2.9832 | 0.6170 | 0.6170 | 0.6170 |
| 0.3252 | 7.96 | 113500 | 3.0174 | 0.6071 | 0.6071 | 0.6071 |
| 0.2802 | 7.99 | 114000 | 3.0040 | 0.6123 | 0.6123 | 0.6123 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-small-japanese-aozora | 689ba5aaf16947395ddb1bee1f50938b8001be15 | 2022-05-24T03:59:55.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/deberta-small-japanese-aozora | 4 | null | transformers | 19,965 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# deberta-small-japanese-aozora
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 (Aozora Bunko) texts. You can fine-tune `deberta-small-japanese-aozora` for downstream tasks such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-small-japanese-luw-upos), dependency parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-small-japanese-aozora")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-small-japanese-aozora")
```
|
versae/bertin-roberta-base-spanish-finetuned-recores3 | c7adb28ff7ee1a5c708f6de62870b247f5aebc55 | 2022-05-23T14:13:48.000Z | [
"pytorch",
"tensorboard",
"roberta",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index"
] | multiple-choice | false | versae | null | versae/bertin-roberta-base-spanish-finetuned-recores3 | 4 | null | transformers | 19,966 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bertin-roberta-base-spanish-finetuned-recores3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin-roberta-base-spanish-finetuned-recores3
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0975
- Accuracy: 0.3884
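A minimal, hedged multiple-choice sketch (not from the original card); the question, candidate answers, and number of options are illustrative placeholders, since the evaluation data is not documented here:
```python
# Minimal sketch: score candidate answers with the multiple-choice head.
# Question and options are illustrative placeholders, not from the actual task data.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "versae/bertin-roberta-base-spanish-finetuned-recores3"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

question = "¿Cuál es la capital de Francia?"
options = ["Madrid", "París", "Roma", "Berlín", "Lisboa"]

# Encode (question, option) pairs, then reshape to (batch=1, num_choices, seq_len).
inputs = tokenizer([question] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in inputs.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(-1).item()])
```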
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6095 | 1.0 | 524 | 1.6094 | 0.2342 |
| 1.607 | 2.0 | 1048 | 1.5612 | 0.3058 |
| 1.4059 | 3.0 | 1572 | 1.6292 | 0.3361 |
| 0.7047 | 4.0 | 2096 | 2.5111 | 0.4132 |
| 0.2671 | 5.0 | 2620 | 3.2399 | 0.3499 |
| 0.1065 | 6.0 | 3144 | 5.1217 | 0.3444 |
| 0.0397 | 7.0 | 3668 | 4.3270 | 0.3691 |
| 0.0162 | 8.0 | 4192 | 5.1796 | 0.3719 |
| 0.0096 | 9.0 | 4716 | 5.2161 | 0.3994 |
| 0.0118 | 10.0 | 5240 | 4.9225 | 0.3719 |
| 0.0015 | 11.0 | 5764 | 5.0544 | 0.3829 |
| 0.0091 | 12.0 | 6288 | 5.7731 | 0.3884 |
| 0.0052 | 13.0 | 6812 | 4.1606 | 0.3939 |
| 0.0138 | 14.0 | 7336 | 6.2725 | 0.3857 |
| 0.0027 | 15.0 | 7860 | 6.2274 | 0.3857 |
| 0.0003 | 16.0 | 8384 | 6.0935 | 0.4022 |
| 0.0002 | 17.0 | 8908 | 5.7650 | 0.3994 |
| 0.0 | 18.0 | 9432 | 6.3595 | 0.4215 |
| 0.0 | 19.0 | 9956 | 5.8934 | 0.3747 |
| 0.0001 | 20.0 | 10480 | 6.0571 | 0.3884 |
| 0.0 | 21.0 | 11004 | 6.0718 | 0.3884 |
| 0.0 | 22.0 | 11528 | 6.0844 | 0.3884 |
| 0.0 | 23.0 | 12052 | 6.0930 | 0.3884 |
| 0.0 | 24.0 | 12576 | 6.0966 | 0.3884 |
| 0.0 | 25.0 | 13100 | 6.0975 | 0.3884 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
peter2000/xlm-roberta-base-finetuned-osdg | 718eec41293aca60dc38e608301df044fc06f92c | 2022-05-24T08:50:18.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | peter2000 | null | peter2000/xlm-roberta-base-finetuned-osdg | 4 | null | transformers | 19,967 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-osdg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-osdg
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6747
- Acc: 0.8296
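A minimal, hedged inference sketch (not from the original card); the input sentence is a placeholder and the label names come from the checkpoint's config:
```python
# Minimal sketch: classify a sentence; the label set is whatever the checkpoint's config defines.
from transformers import pipeline

classifier = pipeline("text-classification", model="peter2000/xlm-roberta-base-finetuned-osdg")
print(classifier("Access to clean water and sanitation is a basic human right."))
```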
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-07
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.552 | 1.0 | 509 | 0.6801 | 0.8229 |
| 0.5261 | 2.0 | 1018 | 0.6821 | 0.8218 |
| 0.5518 | 3.0 | 1527 | 0.6770 | 0.8246 |
| 0.4856 | 4.0 | 2036 | 0.6781 | 0.8279 |
| 0.5427 | 5.0 | 2545 | 0.6748 | 0.8318 |
| 0.5049 | 6.0 | 3054 | 0.6769 | 0.8290 |
| 0.5155 | 7.0 | 3563 | 0.6756 | 0.8307 |
| 0.503 | 8.0 | 4072 | 0.6763 | 0.8296 |
| 0.5009 | 9.0 | 4581 | 0.6741 | 0.8301 |
| 0.555 | 10.0 | 5090 | 0.6747 | 0.8296 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
CEBaB/bert-base-uncased.CEBaB.causalm.food__service.2-class.exclusive.seed_42 | 18d28b45cc30242f73582e9313df57c106f83aea | 2022-05-24T12:09:09.000Z | [
"pytorch",
"bert_causalm",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.causalm.food__service.2-class.exclusive.seed_42 | 4 | null | transformers | 19,968 | Entry not found |
juancavallotti/bert-zs-sentence-classifier | a131bbdbde31c8c2240d5256e76376d3fa6d163e | 2022-05-23T22:31:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | juancavallotti | null | juancavallotti/bert-zs-sentence-classifier | 4 | null | transformers | 19,969 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-zs-sentence-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-zs-sentence-classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3663
- F1: 0.8483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5973 | 0.01 | 500 | 0.5186 | 0.7538 |
| 0.5021 | 0.03 | 1000 | 0.4646 | 0.7996 |
| 0.4741 | 0.04 | 1500 | 0.4634 | 0.8064 |
| 0.4656 | 0.06 | 2000 | 0.4485 | 0.8142 |
| 0.4567 | 0.07 | 2500 | 0.4345 | 0.8160 |
| 0.4448 | 0.09 | 3000 | 0.4239 | 0.8228 |
| 0.4403 | 0.1 | 3500 | 0.4155 | 0.8294 |
| 0.4163 | 0.12 | 4000 | 0.4021 | 0.8290 |
| 0.4205 | 0.13 | 4500 | 0.4057 | 0.8283 |
| 0.416 | 0.14 | 5000 | 0.4049 | 0.8319 |
| 0.4115 | 0.16 | 5500 | 0.4095 | 0.8280 |
| 0.4156 | 0.17 | 6000 | 0.3927 | 0.8349 |
| 0.4042 | 0.19 | 6500 | 0.4003 | 0.8392 |
| 0.4057 | 0.2 | 7000 | 0.3929 | 0.8385 |
| 0.3977 | 0.22 | 7500 | 0.3915 | 0.8406 |
| 0.4049 | 0.23 | 8000 | 0.3785 | 0.8433 |
| 0.4027 | 0.24 | 8500 | 0.3807 | 0.8424 |
| 0.4096 | 0.26 | 9000 | 0.3768 | 0.8435 |
| 0.3958 | 0.27 | 9500 | 0.3846 | 0.8420 |
| 0.4037 | 0.29 | 10000 | 0.3808 | 0.8381 |
| 0.3813 | 0.3 | 10500 | 0.4004 | 0.8415 |
| 0.3934 | 0.32 | 11000 | 0.3821 | 0.8422 |
| 0.3895 | 0.33 | 11500 | 0.3844 | 0.8428 |
| 0.3907 | 0.35 | 12000 | 0.3847 | 0.8435 |
| 0.3862 | 0.36 | 12500 | 0.3803 | 0.8431 |
| 0.3958 | 0.37 | 13000 | 0.3739 | 0.8392 |
| 0.3845 | 0.39 | 13500 | 0.3817 | 0.8422 |
| 0.3914 | 0.4 | 14000 | 0.3857 | 0.8424 |
| 0.3814 | 0.42 | 14500 | 0.3793 | 0.8438 |
| 0.3816 | 0.43 | 15000 | 0.3843 | 0.8395 |
| 0.4022 | 0.45 | 15500 | 0.3737 | 0.8436 |
| 0.3879 | 0.46 | 16000 | 0.3750 | 0.8424 |
| 0.3794 | 0.48 | 16500 | 0.3743 | 0.8410 |
| 0.393 | 0.49 | 17000 | 0.3733 | 0.8461 |
| 0.384 | 0.5 | 17500 | 0.3765 | 0.8476 |
| 0.3782 | 0.52 | 18000 | 0.3748 | 0.8451 |
| 0.3931 | 0.53 | 18500 | 0.3807 | 0.8454 |
| 0.3889 | 0.55 | 19000 | 0.3653 | 0.8463 |
| 0.386 | 0.56 | 19500 | 0.3707 | 0.8445 |
| 0.3802 | 0.58 | 20000 | 0.3700 | 0.8474 |
| 0.3883 | 0.59 | 20500 | 0.3646 | 0.8463 |
| 0.3825 | 0.61 | 21000 | 0.3665 | 0.8513 |
| 0.382 | 0.62 | 21500 | 0.3620 | 0.8508 |
| 0.3795 | 0.63 | 22000 | 0.3692 | 0.8493 |
| 0.367 | 0.65 | 22500 | 0.3704 | 0.8479 |
| 0.3825 | 0.66 | 23000 | 0.3723 | 0.8472 |
| 0.3902 | 0.68 | 23500 | 0.3681 | 0.8465 |
| 0.3813 | 0.69 | 24000 | 0.3668 | 0.8515 |
| 0.3878 | 0.71 | 24500 | 0.3632 | 0.8506 |
| 0.3743 | 0.72 | 25000 | 0.3728 | 0.8463 |
| 0.3826 | 0.73 | 25500 | 0.3746 | 0.8465 |
| 0.3892 | 0.75 | 26000 | 0.3602 | 0.8518 |
| 0.3767 | 0.76 | 26500 | 0.3722 | 0.8513 |
| 0.3724 | 0.78 | 27000 | 0.3716 | 0.8499 |
| 0.3767 | 0.79 | 27500 | 0.3651 | 0.8483 |
| 0.3846 | 0.81 | 28000 | 0.3753 | 0.8493 |
| 0.3748 | 0.82 | 28500 | 0.3720 | 0.8458 |
| 0.3768 | 0.84 | 29000 | 0.3663 | 0.8508 |
| 0.3716 | 0.85 | 29500 | 0.3635 | 0.8531 |
| 0.3673 | 0.86 | 30000 | 0.3659 | 0.8485 |
| 0.3805 | 0.88 | 30500 | 0.3608 | 0.8518 |
| 0.3718 | 0.89 | 31000 | 0.3695 | 0.8520 |
| 0.374 | 0.91 | 31500 | 0.3631 | 0.8485 |
| 0.3871 | 0.92 | 32000 | 0.3659 | 0.8485 |
| 0.3724 | 0.94 | 32500 | 0.3584 | 0.8518 |
| 0.3756 | 0.95 | 33000 | 0.3587 | 0.8492 |
| 0.3709 | 0.97 | 33500 | 0.3700 | 0.8488 |
| 0.376 | 0.98 | 34000 | 0.3657 | 0.8492 |
| 0.372 | 0.99 | 34500 | 0.3663 | 0.8483 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ismail-lucifer011/autotrain-company_all-903429540 | bfc9313b5ef49ff46dac85bd335f7a49e966c2e3 | 2022-05-24T13:52:50.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:ismail-lucifer011/autotrain-data-company_all",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | ismail-lucifer011 | null | ismail-lucifer011/autotrain-company_all-903429540 | 4 | null | transformers | 19,970 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ismail-lucifer011/autotrain-data-company_all
co2_eq_emissions: 119.04546626922827
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 903429540
- CO2 Emissions (in grams): 119.04546626922827
## Validation Metrics
- Loss: 0.00617758184671402
- Accuracy: 0.9981441241415306
- Precision: 0.9826569893335472
- Recall: 0.9839294138903667
- F1: 0.9832927899686521
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ismail-lucifer011/autotrain-company_all-903429540
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ismail-lucifer011/autotrain-company_all-903429540", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ismail-lucifer011/autotrain-company_all-903429540", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
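# Added sketch (not part of the original card): map the token-classification
# logits to entity labels using the model's id2label mapping.
predictions = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(tok, model.config.id2label[int(p)]) for tok, p in zip(tokens, predictions)])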
``` |
stevemobs/deberta-base-finetuned-aqa | 304db0cfe9a79e419be46cc76a78b7d08780957e | 2022-05-24T16:35:00.000Z | [
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"dataset:adversarial_qa",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/deberta-base-finetuned-aqa | 4 | null | transformers | 19,971 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- adversarial_qa
model-index:
- name: deberta-base-finetuned-aqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-aqa
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the adversarial_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6394
## Model description
More information needed
## Intended uses & limitations
More information needed
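Until this section is filled in, here is a minimal extractive-QA sketch with this checkpoint; the question and context below are made-up examples, not taken from the training data:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="stevemobs/deberta-base-finetuned-aqa")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The checkpoint was fine-tuned on the adversarial_qa dataset for two epochs.",
)
# result holds the extracted answer span, its character offsets and a confidence score
print(result["answer"], result["score"])
```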
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1054 | 1.0 | 2527 | 1.6947 |
| 1.5387 | 2.0 | 5054 | 1.6394 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
peggyhuang/roberta-canard | a3c36aa7147bd64d7ab8b84653c3d1988f7870c5 | 2022-05-24T20:39:02.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | peggyhuang | null | peggyhuang/roberta-canard | 4 | null | transformers | 19,972 | Entry not found |
emilylearning/cond_ft_none_on_reddit__prcnt_100__test_run_False__xlm-roberta-base | 6ff54acfc8236740dc59014ba0094381671b9a0c | 2022-05-26T08:38:26.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/cond_ft_none_on_reddit__prcnt_100__test_run_False__xlm-roberta-base | 4 | null | transformers | 19,973 | Entry not found |
castorini/monot5-small-msmarco-100k | d1490270598cf288131cc8cb3d3f2b6148203234 | 2022-05-25T15:08:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | castorini | null | castorini/monot5-small-msmarco-100k | 4 | null | transformers | 19,974 | This model is a T5-small reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 1 epoch).
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
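For a quick check outside pygaggle, a hedged scoring sketch with plain transformers is shown below. It assumes the standard monoT5 input format (`Query: ... Document: ... Relevant:`) and reads the relevance score from the probability of generating "true" versus "false"; the query and passage are made-up examples and the token ids are looked up at runtime rather than hard-coded.
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "castorini/monot5-small-msmarco-100k"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

query = "how do solar panels work"
passage = "Solar panels convert sunlight into electricity using photovoltaic cells."

inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt")
decoder_input_ids = torch.full(
    (1, 1), model.config.decoder_start_token_id, dtype=torch.long
)

with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]

# First sub-token of "true" / "false" in the T5 vocabulary (assumed lookup).
true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]
score = torch.log_softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(score)  # less negative means more relevant
```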
|
emilylearning/cond_ft_subreddit_on_reddit__prcnt_100__test_run_False__xlm-roberta-base | 2e0f777ee72c8b42f1933cae9826d017612bce4d | 2022-05-27T00:13:19.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/cond_ft_subreddit_on_reddit__prcnt_100__test_run_False__xlm-roberta-base | 4 | null | transformers | 19,975 | Entry not found |
aioxlabs/dvoice-kabyle | bef2fda72d557f732294d962d8748ed122d8d6c3 | 2022-05-28T08:21:21.000Z | [
"wav2vec2",
"feature-extraction",
"kab",
"dataset:commonvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | aioxlabs | null | aioxlabs/dvoice-kabyle | 4 | null | speechbrain | 19,976 | ---
language: "kab"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on DVoice Kabyle (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on a [CommonVoice](https://commonvoice.mozilla.org/) Kabyle dataset within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
| DVoice Release | Val. CER | Val. WER | Test CER | Test WER |
|:-------------:|:---------------------------:| -----:| -----:| -----:|
| v2.0 | 6.67 | 25.22 | 6.55 | 24.80 |
# Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units, trained on the training transcriptions.
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and fine-tuned on the Kabyle dataset.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
# Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please notice that we encourage you to read the SpeechBrain tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
# Transcribing your own audio files (in Kabyle)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="aioxlabs/dvoice-kabyle", savedir="pretrained_models/asr-wav2vec2-dvoice-kab")
asr_model.transcribe_file('./the_path_to_your_audio_file')
```
# Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
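A minimal sketch of the same call with the GPU option (the save directory is just a local cache path):
```python
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="aioxlabs/dvoice-kabyle",
    savedir="pretrained_models/asr-wav2vec2-dvoice-kab",
    run_opts={"device": "cuda"},
)
asr_model.transcribe_file('./the_path_to_your_audio_file')
```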
# Training
To train the model from scratch, please see our GitHub tutorial [here](https://github.com/AIOXLABS/DVoice).
# Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\\\url{https://github.com/speechbrain/speechbrain}},
}
```
# About DVoice
DVoice is a community initiative that aims to provide African low-resource languages with data and models to facilitate their use of voice technologies. The lack of data for these languages makes it necessary to collect data using methods that are specific to each one. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice and collect authentic recordings from the community, and transfer-learning techniques for automatically labeling recordings retrieved from social media. The DVoice platform currently manages 7 languages including Darija (the Moroccan Arabic dialect), whose dataset appears in this version, as well as Wolof, Mandingo, Serere, Pular, Diola and Soninke.
For this project, AIOX Labs and the SI2M Laboratory are joining forces to build the future of these technologies together.
# About AIOX Labs
Based in Rabat, London and Paris, AIOX-Labs mobilizes artificial intelligence technologies to meet the business needs and data projects of companies.
- It supports the growth of groups, the optimization of processes and the improvement of the customer experience.
- AIOX-Labs is multi-sector, from fintech to industry, including retail and consumer goods.
- Business-ready data products with a solid algorithmic base and adaptability to the specific needs of each client.
- A complementary team made up of PhDs in AI and business experts with a solid scientific base and international publications.
Website: [https://www.aiox-labs.com/](https://www.aiox-labs.com/)
# SI2M Laboratory
The Information Systems, Intelligent Systems and Mathematical Modeling Research Laboratory (SI2M) is an academic research laboratory of the National Institute of Statistics and Applied Economics (INSEA). The research areas of the laboratory are Information Systems, Intelligent Systems, Artificial Intelligence, Decision Support, Network and System Security, and Mathematical Modelling.
Website: [SI2M Laboratory](https://insea.ac.ma/index.php/pole-recherche/equipe-de-recherche/150-laboratoire-de-recherche-en-systemes-d-information-systemes-intelligents-et-modelisation-mathematique)
# About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
# Acknowledgements
This research was supported through computational resources of HPC-MARWAN (www.marwan.ma/hpc) provided by CNRST, Rabat, Morocco. We deeply thank this institution. |
Xuan-Rui/pet-1000-iPT.p4PTmBERT | a921f9417046304d104b4bd4add3f12ab48b0228 | 2022-05-27T04:13:49.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-1000-iPT.p4PTmBERT | 4 | null | transformers | 19,977 | Entry not found |
Xuan-Rui/pet-1000-iPT.p4PTptBERT | 79e496bb2e628b5d0db5ac7f085f4d433a2b4d07 | 2022-05-27T04:21:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Xuan-Rui | null | Xuan-Rui/pet-1000-iPT.p4PTptBERT | 4 | null | transformers | 19,978 | Entry not found |
teppei727/bart-base-finetuned-amazon-onlyen | 9186635a0a713415b2342d3818299ed426582570 | 2022-05-27T08:16:49.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:amazon_reviews_multi",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | teppei727 | null | teppei727/bart-base-finetuned-amazon-onlyen | 4 | null | transformers | 19,979 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- rouge
model-index:
- name: bart-base-finetuned-amazon-onlyen
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Rouge1
type: rouge
value: 17.2662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-amazon-onlyen
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7572
- Rouge1: 17.2662
- Rouge2: 8.7425
- Rougel: 16.5765
- Rougelsum: 16.6844
## Model description
More information needed
## Intended uses & limitations
More information needed
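In the meantime, a minimal usage sketch for review summarization; the checkpoint name comes from this card and the input review is a made-up example:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="teppei727/bart-base-finetuned-amazon-onlyen"
)
review = (
    "I bought this kettle a month ago. It boils quickly and the lid seals well, "
    "but the handle gets uncomfortably hot after a few minutes."
)
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```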
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.9212 | 1.0 | 771 | 2.8034 | 15.381 | 8.5254 | 15.223 | 15.059 |
| 2.3109 | 2.0 | 1542 | 2.8386 | 19.8947 | 11.0965 | 19.4876 | 19.5366 |
| 1.8973 | 3.0 | 2313 | 2.9258 | 17.7443 | 8.9232 | 17.311 | 17.1796 |
| 1.5421 | 4.0 | 3084 | 3.0696 | 17.8204 | 8.8919 | 17.3889 | 17.205 |
| 1.2391 | 5.0 | 3855 | 3.2609 | 15.9828 | 8.0523 | 15.393 | 15.3808 |
| 0.9736 | 6.0 | 4626 | 3.4080 | 15.7572 | 8.806 | 15.2435 | 15.3036 |
| 0.7824 | 7.0 | 5397 | 3.5537 | 18.4389 | 9.5135 | 17.7836 | 17.8758 |
| 0.6233 | 8.0 | 6168 | 3.6909 | 14.6698 | 6.9584 | 13.9417 | 14.0057 |
| 0.5086 | 9.0 | 6939 | 3.7357 | 16.9465 | 7.7604 | 16.1993 | 16.2963 |
| 0.4412 | 10.0 | 7710 | 3.7572 | 17.2662 | 8.7425 | 16.5765 | 16.6844 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kenkaneki/bert-base-aeslc-da | 372609c0e8fd0e9ea87291d60fc24a79139361ed | 2022-05-27T20:35:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | kenkaneki | null | kenkaneki/bert-base-aeslc-da | 4 | null | transformers | 19,980 | Entry not found |
Abdelrahman-Rezk/bert-base-arabic-camelbert-mix-poetry-finetuned-qawaf2 | a4c747d593c386b37fd8fe91b91e90f708acfa11 | 2022-05-27T21:17:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Abdelrahman-Rezk | null | Abdelrahman-Rezk/bert-base-arabic-camelbert-mix-poetry-finetuned-qawaf2 | 4 | null | transformers | 19,981 | Entry not found |
PDRES/roberta-base-bne-finetuned-amazon_reviews_multi | a38121201eb167e6268e6cbc7976aae5987e4bd1 | 2022-05-28T06:21:35.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | PDRES | null | PDRES/roberta-base-bne-finetuned-amazon_reviews_multi | 4 | null | transformers | 19,982 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Ritvik19/autotrain-sentiment_polarity-918130222 | 4c9d71c44c804196b543d2ccbce6b5bc32e66288 | 2022-05-28T14:18:46.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:Ritvik19/autotrain-data-sentiment_polarity",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | Ritvik19 | null | Ritvik19/autotrain-sentiment_polarity-918130222 | 4 | null | transformers | 19,983 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ritvik19/autotrain-data-sentiment_polarity
co2_eq_emissions: 4.280488237750762
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 918130222
- CO2 Emissions (in grams): 4.280488237750762
## Validation Metrics
- Loss: 0.13608604669570923
- Accuracy: 0.9504804036293305
- Precision: 0.9792047060317863
- Recall: 0.9647185343057701
- AUC: 0.9791895292939061
- F1: 0.9719076444852428
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ritvik19/autotrain-sentiment_polarity-918130222
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ritvik19/autotrain-sentiment_polarity-918130222", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ritvik19/autotrain-sentiment_polarity-918130222", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
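# Added sketch (not part of the original card): convert the logits into a
# predicted polarity label and a softmax confidence score.
probs = outputs.logits.softmax(dim=-1)[0]
pred = int(probs.argmax())
print(model.config.id2label[pred], float(probs[pred]))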
``` |
zenkri/autotrain-Arabic_Poetry_by_Subject-920730227 | 8da60381502ea1d0600ea6611db3fce44035e955 | 2022-05-28T08:39:47.000Z | [
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:zenkri/autotrain-data-Arabic_Poetry_by_Subject-1d8ba412",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | zenkri | null | zenkri/autotrain-Arabic_Poetry_by_Subject-920730227 | 4 | null | transformers | 19,984 | ---
tags: autotrain
language: ar
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zenkri/autotrain-data-Arabic_Poetry_by_Subject-1d8ba412
co2_eq_emissions: 0.06170374019107819
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 920730227
- CO2 Emissions (in grams): 0.06170374019107819
## Validation Metrics
- Loss: 0.5905918478965759
- Accuracy: 0.8687837028160575
- Macro F1: 0.7777187122151491
- Micro F1: 0.8687837028160575
- Weighted F1: 0.8673230166815299
- Macro Precision: 0.796117563625016
- Micro Precision: 0.8687837028160575
- Weighted Precision: 0.8692944353097692
- Macro Recall: 0.7732013751753718
- Micro Recall: 0.8687837028160575
- Weighted Recall: 0.8687837028160575
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zenkri/autotrain-Arabic_Poetry_by_Subject-920730227
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("zenkri/autotrain-Arabic_Poetry_by_Subject-920730227", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("zenkri/autotrain-Arabic_Poetry_by_Subject-920730227", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
GioReg/dbmdzBERTnews | f7e6b2aca5bda59d2e42c4d28a37c9e932215a66 | 2022-05-28T12:56:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | GioReg | null | GioReg/dbmdzBERTnews | 4 | null | transformers | 19,985 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: dbmdzBERTnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dbmdzBERTnews
This model is a fine-tuned version of [dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0960
- Accuracy: 0.9733
- F1: 0.9730
## Model description
More information needed
## Intended uses & limitations
More information needed
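Until the card is filled in, a minimal classification sketch; the example sentence is made up, and the label names are whatever the fine-tuned head defines in `id2label`:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="GioReg/dbmdzBERTnews")
print(classifier("Il governo ha approvato la nuova legge di bilancio."))
```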
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
GioReg/umbertoBERTnews | b545a797079853ff6c0f4514a23c0368c677dce5 | 2022-05-28T14:01:45.000Z | [
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | GioReg | null | GioReg/umbertoBERTnews | 4 | null | transformers | 19,986 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: umbertoBERTnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# umbertoBERTnews
This model is a fine-tuned version of [Musixmatch/umberto-commoncrawl-cased-v1](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0847
- Accuracy: 0.9798
- F1: 0.9798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
GioReg/mBERTrecensioni | d238e5b0a6f96b99c1572ab8b924c2124ada59ee | 2022-05-28T15:35:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | GioReg | null | GioReg/mBERTrecensioni | 4 | null | transformers | 19,987 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mBERTrecensioni
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERTrecensioni
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3 | f2e3e1e46834481197e7382ac81b86b59f64a919 | 2022-05-29T19:18:42.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:scientific_papers",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3 | 4 | null | transformers | 19,988 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: arxiv
metrics:
- name: Rouge1
type: rouge
value: 42.2455
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3-arxiv1o3
This model is a fine-tuned version of [theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3](https://huggingface.co/theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3-pubmed3o3) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1825
- Rouge1: 42.2455
- Rouge2: 15.6488
- Rougel: 24.4935
- Rougelsum: 37.9427
- Gen Len: 131.1379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.185 | 1.0 | 33840 | 2.1825 | 42.2455 | 15.6488 | 24.4935 | 37.9427 | 131.1379 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
zoha/wav2vec2-base-common-voice-90p-persian-colab | 879a28d30e1db43b6da7d43aef8a2fb69f6f33d3 | 2022-05-28T20:21:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | zoha | null | zoha/wav2vec2-base-common-voice-90p-persian-colab | 4 | null | transformers | 19,989 | Entry not found |
GioReg/notiBERTrecensioni | bd710ceabd8011cac576e26803b26ef6fddb04a5 | 2022-05-28T17:47:42.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | GioReg | null | GioReg/notiBERTrecensioni | 4 | null | transformers | 19,990 | ---
tags:
- generated_from_trainer
model-index:
- name: notiBERTrecensioni
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# notiBERTrecensioni
This model is a fine-tuned version of [GioReg/notiBERTo](https://huggingface.co/GioReg/notiBERTo) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
KDB/bert-base-finetuned-sts | 03cfac002667c95e654b0e98d85e1fb401b2b36d | 2022-05-30T03:59:09.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:klue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | KDB | null | KDB/bert-base-finetuned-sts | 4 | null | transformers | 19,991 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
model-index:
- name: bert-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.8970473420720607
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4770
- Pearsonr: 0.8970
## Model description
More information needed
## Intended uses & limitations
More information needed
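A minimal scoring sketch in the meantime. It assumes the head was trained as a single-logit regressor, which the Pearson-correlation metric suggests; the sentence pair is a made-up example:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KDB/bert-base-finetuned-sts")
model = AutoModelForSequenceClassification.from_pretrained("KDB/bert-base-finetuned-sts")

inputs = tokenizer("오늘 날씨가 정말 좋다.", "오늘은 날씨가 맑고 화창하다.", return_tensors="pt")
with torch.no_grad():
    similarity = model(**inputs).logits.squeeze().item()
print(similarity)  # KLUE STS scores range roughly from 0 to 5
```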
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 92 | 0.6330 | 0.8717 |
| No log | 2.0 | 184 | 0.6206 | 0.8818 |
| No log | 3.0 | 276 | 0.5010 | 0.8947 |
| No log | 4.0 | 368 | 0.4717 | 0.8956 |
| No log | 5.0 | 460 | 0.4770 | 0.8970 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
chrisvinsen/xlsr-wav2vec2-final-1-lm-3 | 38e356aa0c10c3bf0d7eee484092e10b7601fe4d | 2022-06-02T23:23:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/xlsr-wav2vec2-final-1-lm-3 | 4 | null | transformers | 19,992 | Indonli + CommonVoice8.0 Dataset --> Train + Validation + Test
WER : 0.216
WER with LM: 0.104 |
sriiikar/wav2vec2-hindi-3 | aadba336473e345b85b2667b223217dd98a590d2 | 2022-05-29T11:42:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sriiikar | null | sriiikar/wav2vec2-hindi-3 | 4 | null | transformers | 19,993 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-hindi-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-hindi-3
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0900
- Wer: 0.7281
## Model description
More information needed
## Intended uses & limitations
More information needed
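A minimal transcription sketch in the meantime; the audio path is a placeholder, and 16 kHz mono input is assumed, as with other wav2vec 2.0 fine-tunes:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="sriiikar/wav2vec2-hindi-3")
print(asr("path/to/hindi_sample.wav")["text"])
```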
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.609 | 6.41 | 1000 | 1.2290 | 0.7497 |
| 0.3754 | 12.82 | 2000 | 1.5350 | 0.7128 |
| 0.1587 | 19.23 | 3000 | 1.8671 | 0.7322 |
| 0.103 | 25.64 | 4000 | 1.9383 | 0.7300 |
| 0.0761 | 32.05 | 5000 | 2.0767 | 0.7306 |
| 0.0616 | 38.46 | 6000 | 2.0900 | 0.7281 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
YeRyeongLee/bert-base-uncased-finetuned-removed-0529 | 652f91a943c0d549518b2d5ba63d5e94e7ee26c8 | 2022-05-29T15:03:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | YeRyeongLee | null | YeRyeongLee/bert-base-uncased-finetuned-removed-0529 | 4 | null | transformers | 19,994 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-removed-0529
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-removed-0529
This model is a fine-tuned version of [YeRyeongLee/bert-base-uncased-finetuned-0505-2](https://huggingface.co/YeRyeongLee/bert-base-uncased-finetuned-0505-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1501
- Accuracy: 0.8767
- F1: 0.8765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.5072 | 0.8358 | 0.8373 |
| No log | 2.0 | 6360 | 0.5335 | 0.8566 | 0.8564 |
| No log | 3.0 | 9540 | 0.6317 | 0.8594 | 0.8603 |
| No log | 4.0 | 12720 | 0.6781 | 0.8723 | 0.8727 |
| No log | 5.0 | 15900 | 0.8235 | 0.8679 | 0.8682 |
| No log | 6.0 | 19080 | 0.9205 | 0.8676 | 0.8674 |
| No log | 7.0 | 22260 | 0.9898 | 0.8698 | 0.8695 |
| 0.2348 | 8.0 | 25440 | 1.0756 | 0.8695 | 0.8695 |
| 0.2348 | 9.0 | 28620 | 1.1342 | 0.8739 | 0.8735 |
| 0.2348 | 10.0 | 31800 | 1.1501 | 0.8767 | 0.8765 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
GioReg/bertNEGsentiment | 503b75126fc3be211d06cbaf24ad6e7f10a24a12 | 2022-05-29T08:24:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | GioReg | null | GioReg/bertNEGsentiment | 4 | null | transformers | 19,995 | ---
tags:
- generated_from_trainer
model-index:
- name: bertNEGsentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertNEGsentiment
This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
YeRyeongLee/bert-base-uncased-finetuned-removed-0530 | 7ba2a2adced163f4fe876b121f442b1dfd714eba | 2022-05-30T03:13:36.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | YeRyeongLee | null | YeRyeongLee/bert-base-uncased-finetuned-removed-0530 | 4 | null | transformers | 19,996 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-removed-0530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-removed-0530
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1269
- Accuracy: 0.8745
- F1: 0.8745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.5939 | 0.8113 | 0.8113 |
| No log | 2.0 | 6360 | 0.6459 | 0.8189 | 0.8183 |
| No log | 3.0 | 9540 | 0.6523 | 0.8597 | 0.8604 |
| No log | 4.0 | 12720 | 0.8159 | 0.8522 | 0.8521 |
| No log | 5.0 | 15900 | 0.9294 | 0.8601 | 0.8599 |
| No log | 6.0 | 19080 | 1.0066 | 0.8594 | 0.8592 |
| No log | 7.0 | 22260 | 1.0268 | 0.8686 | 0.8689 |
| 0.2451 | 8.0 | 25440 | 1.0274 | 0.8758 | 0.8760 |
| 0.2451 | 9.0 | 28620 | 1.0850 | 0.8726 | 0.8727 |
| 0.2451 | 10.0 | 31800 | 1.1269 | 0.8745 | 0.8745 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
YeRyeongLee/roberta-base-finetuned-removed-0530 | 4b15bacdaede3640d136b45b639c70f25cf59950 | 2022-05-30T06:26:57.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | YeRyeongLee | null | YeRyeongLee/roberta-base-finetuned-removed-0530 | 4 | null | transformers | 19,997 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-finetuned-removed-0530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-removed-0530
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7910
- Accuracy: 0.9082
- F1: 0.9084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.6250 | 0.8277 | 0.8250 |
| No log | 2.0 | 6360 | 0.4578 | 0.8689 | 0.8684 |
| No log | 3.0 | 9540 | 0.4834 | 0.8792 | 0.8797 |
| No log | 4.0 | 12720 | 0.6377 | 0.8899 | 0.8902 |
| No log | 5.0 | 15900 | 0.6498 | 0.8921 | 0.8921 |
| No log | 6.0 | 19080 | 0.6628 | 0.8931 | 0.8928 |
| No log | 7.0 | 22260 | 0.7380 | 0.8925 | 0.8918 |
| 0.2877 | 8.0 | 25440 | 0.7313 | 0.8975 | 0.8974 |
| 0.2877 | 9.0 | 28620 | 0.7593 | 0.9025 | 0.9026 |
| 0.2877 | 10.0 | 31800 | 0.7910 | 0.9082 | 0.9084 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
|
CH0KUN/autotrain-TNC_Data1000_wangchanBERTa-927730545 | 398a8240ffd202eb74b57c45bbba369829efbb02 | 2022-05-30T06:32:34.000Z | [
"pytorch",
"camembert",
"text-classification",
"unk",
"dataset:CH0KUN/autotrain-data-TNC_Data1000_wangchanBERTa",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | CH0KUN | null | CH0KUN/autotrain-TNC_Data1000_wangchanBERTa-927730545 | 4 | null | transformers | 19,998 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- CH0KUN/autotrain-data-TNC_Data1000_wangchanBERTa
co2_eq_emissions: 0.03882318406133382
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 927730545
- CO2 Emissions (in grams): 0.03882318406133382
## Validation Metrics
- Loss: 0.346664160490036
- Accuracy: 0.9212962962962963
- Macro F1: 0.9193830593356196
- Micro F1: 0.9212962962962963
- Weighted F1: 0.9213272351125573
- Macro Precision: 0.920255423800781
- Micro Precision: 0.9212962962962963
- Weighted Precision: 0.9231182355921642
- Macro Recall: 0.920208415963133
- Micro Recall: 0.9212962962962963
- Weighted Recall: 0.9212962962962963
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/CH0KUN/autotrain-TNC_Data1000_wangchanBERTa-927730545
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("CH0KUN/autotrain-TNC_Data1000_wangchanBERTa-927730545", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("CH0KUN/autotrain-TNC_Data1000_wangchanBERTa-927730545", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
CH0KUN/autotrain-TNC_Data2500_WangchanBERTa-928030564 | 796ad0d715f86460a41cce11f3c1b79ea786884b | 2022-05-30T07:27:02.000Z | [
"pytorch",
"camembert",
"text-classification",
"unk",
"dataset:CH0KUN/autotrain-data-TNC_Data2500_WangchanBERTa",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | CH0KUN | null | CH0KUN/autotrain-TNC_Data2500_WangchanBERTa-928030564 | 4 | null | transformers | 19,999 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- CH0KUN/autotrain-data-TNC_Data2500_WangchanBERTa
co2_eq_emissions: 0.07293362913158113
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 928030564
- CO2 Emissions (in grams): 0.07293362913158113
## Validation Metrics
- Loss: 0.4989683926105499
- Accuracy: 0.8445845697329377
- Macro F1: 0.8407629450432429
- Micro F1: 0.8445845697329377
- Weighted F1: 0.8407629450432429
- Macro Precision: 0.8390327354531153
- Micro Precision: 0.8445845697329377
- Weighted Precision: 0.8390327354531154
- Macro Recall: 0.8445845697329377
- Micro Recall: 0.8445845697329377
- Weighted Recall: 0.8445845697329377
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/CH0KUN/autotrain-TNC_Data2500_WangchanBERTa-928030564
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("CH0KUN/autotrain-TNC_Data2500_WangchanBERTa-928030564", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("CH0KUN/autotrain-TNC_Data2500_WangchanBERTa-928030564", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |