modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-12 06:28:00) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 517 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-12 06:24:43) | card (string, length 11–1.01M) |
---|---|---|---|---|---|---|---|---|---|
masakhane/mt5_pcm_en_news | masakhane | 2022-09-24T15:06:19Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:53:01Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
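The Masakhane rows in this dump are all `transformers` checkpoints tagged `text2text-generation`, so any of them can be exercised through the high-level pipeline API. A minimal sketch for the row above; the Pidgin example sentence is an illustrative assumption, not taken from the model card:
```python
# Minimal sketch: Nigerian Pidgin -> English news translation with the
# checkpoint listed above. The input sentence is an illustrative assumption.
from transformers import pipeline

translator = pipeline("text2text-generation", model="masakhane/mt5_pcm_en_news")
result = translator("Di president don talk say e go fix di road.")
print(result[0]["generated_text"])
```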
masakhane/mbart50_pcm_en_news | masakhane | 2022-09-24T15:06:18Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:53:57Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_pcm_rel_news | masakhane | 2022-09-24T15:06:16Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"pcm",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:56:54Z | ---
language:
- en
- pcm
license: afl-3.0
---
|
masakhane/m2m100_418M_en_pcm_rel_news_ft | masakhane | 2022-09-24T15:06:15Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"pcm",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:57:27Z | ---
language:
- en
- pcm
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_rel_ft | masakhane | 2022-09-24T15:06:14Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:58:24Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_rel | masakhane | 2022-09-24T15:06:13Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:58:50Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
masakhane/afrimt5_yor_en_news | masakhane | 2022-09-24T15:06:11Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:11:28Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/afrimbart_yor_en_news | masakhane | 2022-09-24T15:06:11Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:11:53Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/afribyt5_yor_en_news | masakhane | 2022-09-24T15:06:10Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:13:17Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/afrimbart_en_yor_news | masakhane | 2022-09-24T15:06:10Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:12:10Z | ---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/byt5_en_yor_news | masakhane | 2022-09-24T15:06:09Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:13:58Z | ---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/afribyt5_en_yor_news | masakhane | 2022-09-24T15:06:09Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:13:36Z | ---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/mbart50_en_yor_news | masakhane | 2022-09-24T15:06:07Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:15:22Z | ---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/mt5_en_yor_news | masakhane | 2022-09-24T15:06:07Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:14:50Z | ---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/mbart50_yor_en_news | masakhane | 2022-09-24T15:06:06Z | 95 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:15:39Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_yor_news | masakhane | 2022-09-24T15:06:05Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:19:30Z | ---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/m2m100_418M_yor_en_news | masakhane | 2022-09-24T15:06:05Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:19:46Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_yor_rel_news | masakhane | 2022-09-24T15:06:04Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:20:26Z | ---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/m2m100_418M_yor_en_rel | masakhane | 2022-09-24T15:06:02Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:22:03Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/afrimt5_en_swa_news | masakhane | 2022-09-24T15:06:01Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T09:01:16Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/afrimt5_swa_en_news | masakhane | 2022-09-24T15:06:00Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T09:01:34Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/byt5_en_swa_news | masakhane | 2022-09-24T15:05:57Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:10:38Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/mbart50_en_swa_news | masakhane | 2022-09-24T15:05:56Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:11:58Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/mbart50_swa_en_news | masakhane | 2022-09-24T15:05:55Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:12:16Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_rel_news | masakhane | 2022-09-24T15:05:54Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:13:30Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_rel_news | masakhane | 2022-09-24T15:05:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:13:13Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_rel_news_ft | masakhane | 2022-09-24T15:05:53Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:13:52Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_news | masakhane | 2022-09-24T15:05:53Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:12:54Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_rel_ft | masakhane | 2022-09-24T15:05:52Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:14:46Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_rel_news_ft | masakhane | 2022-09-24T15:05:51Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:14:10Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_rel | masakhane | 2022-09-24T15:05:50Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:15:23Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_rel | masakhane | 2022-09-24T15:05:50Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:15:07Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/afrimt5_en_tsn_news | masakhane | 2022-09-24T15:05:49Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T13:48:47Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/byt5_tsn_en_news | masakhane | 2022-09-24T15:05:47Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T13:53:15Z | ---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/mbart50_en_tsn_news | masakhane | 2022-09-24T15:05:45Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T14:02:43Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/mt5_en_tsn_news | masakhane | 2022-09-24T15:05:45Z | 98 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T13:59:24Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/mt5_tsn_en_news | masakhane | 2022-09-24T15:05:44Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T13:59:09Z | ---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_tsn_rel_news | masakhane | 2022-09-24T15:05:42Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T14:23:45Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/m2m100_418M_en_tsn_rel_news_ft | masakhane | 2022-09-24T15:05:41Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T14:33:05Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/m2m100_418M_tsn_en_rel | masakhane | 2022-09-24T15:05:39Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T14:38:31Z | ---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/afrimt5_en_twi_news | masakhane | 2022-09-24T15:05:38Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:50:40Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/afrimt5_twi_en_news | masakhane | 2022-09-24T15:05:37Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:50:58Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/afrimbart_en_twi_news | masakhane | 2022-09-24T15:05:37Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:53:52Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/afrimbart_twi_en_news | masakhane | 2022-09-24T15:05:36Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:53:34Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/afribyt5_twi_en_news | masakhane | 2022-09-24T15:05:35Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:56:34Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/byt5_en_twi_news | masakhane | 2022-09-24T15:05:35Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:02:29Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/mbart50_en_twi_news | masakhane | 2022-09-24T15:05:34Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:03:38Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/mt5_en_twi_news | masakhane | 2022-09-24T15:05:32Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:06:00Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/m2m100_418M_en_twi_rel_news | masakhane | 2022-09-24T15:05:30Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:10:15Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/m2m100_418M_twi_en_rel_news | masakhane | 2022-09-24T15:05:30Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:09:58Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_twi_rel_news_ft | masakhane | 2022-09-24T15:05:29Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:12:51Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/m2m100_418M_en_twi_rel_ft | masakhane | 2022-09-24T15:05:27Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:14:47Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/m2m100_418M_twi_en_rel | masakhane | 2022-09-24T15:05:26Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:17:38Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/afrimt5_en_zul_news | masakhane | 2022-09-24T15:05:24Z | 84 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:52:03Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/afrimt5_zul_en_news | masakhane | 2022-09-24T15:05:24Z | 99 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:51:37Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/afribyt5_zul_en_news | masakhane | 2022-09-24T15:05:21Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:57:29Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/byt5_en_zul_news | masakhane | 2022-09-24T15:05:20Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:02:52Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/mbart50_zul_en_news | masakhane | 2022-09-24T15:05:19Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:04:09Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/byt5_zul_en_news | masakhane | 2022-09-24T15:05:19Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:03:09Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/mt5_zul_en_news | masakhane | 2022-09-24T15:05:18Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:06:24Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/mt5_en_zul_news | masakhane | 2022-09-24T15:05:17Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:06:39Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_zul_en_news | masakhane | 2022-09-24T15:05:16Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:07:50Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_news | masakhane | 2022-09-24T15:05:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:09:23Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_zul_en_rel | masakhane | 2022-09-24T15:05:12Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:18:45Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/m2m100_418M-FR-NEWS | masakhane | 2022-09-24T15:05:11Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-02T22:22:31Z | ---
language: fr
license: afl-3.0
---
### Citation Information
```
@inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
}
``` |
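The `m2m_100` rows use the M2M100 architecture, which needs explicit source- and target-language codes at inference time. A minimal sketch for the FR-NEWS checkpoint above, assuming the fine-tune keeps the stock M2M100 language-code vocabulary; the example sentence is illustrative:
```python
# Minimal sketch: English -> French translation with an M2M100-based
# checkpoint. Assumes the fine-tuned model retains the standard M2M100
# language codes.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "masakhane/m2m100_418M-FR-NEWS"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"  # source language code
inputs = tokenizer("The parliament passed the bill yesterday.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),  # force French output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```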
masakhane/m2m100_418M-EN-NEWS | masakhane | 2022-09-24T15:05:11Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"dataset:masakhane/mafand",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-02T22:21:22Z | ---
language: en
license: afl-3.0
datasets:
- masakhane/mafand
---
### Citation Information
```
@inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
}
``` |
masakhane/m2m100_418M_en_amh_rel | masakhane | 2022-09-24T15:05:10Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"amh",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:06:14Z | ---
language:
- en
- amh
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_en_kin_rel | masakhane | 2022-09-24T15:05:09Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"kin",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:07:12Z | ---
language:
- en
- kin
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_nya_en_rel | masakhane | 2022-09-24T15:05:08Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"nya",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:08:07Z | ---
language:
- nya
- en
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_sna_en_rel | masakhane | 2022-09-24T15:05:07Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"sna",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:09:06Z | ---
language:
- sna
- en
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_en_xho_rel | masakhane | 2022-09-24T15:05:06Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"xho",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:09:29Z | ---
language:
- en
- xho
license: cc-by-nc-4.0
---
|
pranavkrishna/bert_amazon | pranavkrishna | 2022-09-24T14:41:05Z | 86 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-09-24T14:40:15Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: pranavkrishna/bert_amazon
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pranavkrishna/bert_amazon
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.1854
- Validation Loss: 7.6542
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -981, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.1854 | 7.6542 | 0 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
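The serialized optimizer config in the card above (AdamWeightDecay wrapping a WarmUp plus PolynomialDecay schedule) is the shape that `transformers.create_optimizer` produces for TensorFlow training. A minimal sketch that rebuilds an equivalent schedule; `num_train_steps` is an assumed value, since the card's own `decay_steps: -981` looks mis-configured:
```python
# Minimal sketch: rebuilding the AdamWeightDecay optimizer with linear warmup
# described in the card's hyperparameter blob. num_train_steps is assumed for
# illustration and is not stated in the card.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # matches initial_learning_rate above
    num_train_steps=2000,    # assumption; the card's decay_steps is -981
    num_warmup_steps=1000,   # matches warmup_steps above
    weight_decay_rate=0.01,  # matches weight_decay_rate above
)
```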
sd-concepts-library/repeat | sd-concepts-library | 2022-09-24T14:17:05Z | 0 | 2 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T14:16:59Z | ---
license: mit
---
### REPEAT on Stable Diffusion
This is the `<repeat>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
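Besides the linked Colab notebooks, sd-concepts-library embeddings can be loaded directly with the `diffusers` library. A minimal sketch, assuming a CUDA device and a Stable Diffusion v1.5 base model; base model, device, and prompt are illustrative choices, not requirements stated in the card:
```python
# Minimal sketch: loading the <repeat> textual-inversion embedding into a
# Stable Diffusion pipeline. Base model, device, and prompt are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/repeat")

image = pipe("a busy city street in the style of <repeat>").images[0]
image.save("repeat-style.png")
```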
gokuls/BERT-tiny-emotion-intent | gokuls | 2022-09-24T14:11:28Z | 268 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T14:01:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: BERT-tiny-emotion-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-tiny-emotion-intent
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3620
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2603 | 1.0 | 1000 | 0.7766 | 0.7815 |
| 0.5919 | 2.0 | 2000 | 0.4117 | 0.884 |
| 0.367 | 3.0 | 3000 | 0.3188 | 0.8995 |
| 0.2848 | 4.0 | 4000 | 0.2928 | 0.8985 |
| 0.2395 | 5.0 | 5000 | 0.2906 | 0.898 |
| 0.2094 | 6.0 | 6000 | 0.2887 | 0.907 |
| 0.1884 | 7.0 | 7000 | 0.2831 | 0.9065 |
| 0.1603 | 8.0 | 8000 | 0.3044 | 0.9065 |
| 0.1519 | 9.0 | 9000 | 0.3124 | 0.9095 |
| 0.1291 | 10.0 | 10000 | 0.3256 | 0.9065 |
| 0.1179 | 11.0 | 11000 | 0.3651 | 0.9035 |
| 0.1091 | 12.0 | 12000 | 0.3620 | 0.91 |
| 0.0977 | 13.0 | 13000 | 0.3992 | 0.907 |
| 0.0914 | 14.0 | 14000 | 0.4285 | 0.908 |
| 0.0876 | 15.0 | 15000 | 0.4268 | 0.9055 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
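Like the other fine-tuned classifiers in this dump, the emotion model above is a drop-in for the text-classification pipeline. A minimal sketch; the input sentence is illustrative, and the returned label names depend on the checkpoint's config:
```python
# Minimal sketch: emotion classification with the checkpoint above.
# Labels come from the checkpoint's id2label mapping (generic LABEL_i ids
# if the config does not name the emotion classes).
from transformers import pipeline

classifier = pipeline("text-classification", model="gokuls/BERT-tiny-emotion-intent")
print(classifier("I can't believe how well this turned out!"))
```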
gokuls/distilroberta-emotion-intent | gokuls | 2022-09-24T13:36:17Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T13:26:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: distilroberta-emotion-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-emotion-intent
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1496
- Accuracy: 0.9435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4501 | 1.0 | 1000 | 0.2432 | 0.924 |
| 0.1947 | 2.0 | 2000 | 0.1646 | 0.934 |
| 0.1497 | 3.0 | 3000 | 0.1382 | 0.9405 |
| 0.1316 | 4.0 | 4000 | 0.1496 | 0.9435 |
| 0.1145 | 5.0 | 5000 | 0.1684 | 0.9385 |
| 0.1 | 6.0 | 6000 | 0.2342 | 0.943 |
| 0.0828 | 7.0 | 7000 | 0.2807 | 0.939 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Sebabrata/layoutlmv3-finetuned-cord_100 | Sebabrata | 2022-09-24T13:29:13Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-24T12:35:13Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: train
args: cord
metrics:
- name: Precision
type: precision
value: 0.9385640266469282
- name: Recall
type: recall
value: 0.9491017964071856
- name: F1
type: f1
value: 0.9438034983252697
- name: Accuracy
type: accuracy
value: 0.9516129032258065
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Precision: 0.9386
- Recall: 0.9491
- F1: 0.9438
- Accuracy: 0.9516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.56 | 250 | 1.0830 | 0.6854 | 0.7582 | 0.7200 | 0.7725 |
| 1.4266 | 3.12 | 500 | 0.5944 | 0.8379 | 0.8630 | 0.8503 | 0.8680 |
| 1.4266 | 4.69 | 750 | 0.3868 | 0.8828 | 0.9079 | 0.8952 | 0.9155 |
| 0.4084 | 6.25 | 1000 | 0.3146 | 0.9133 | 0.9304 | 0.9218 | 0.9338 |
| 0.4084 | 7.81 | 1250 | 0.2658 | 0.9240 | 0.9371 | 0.9305 | 0.9419 |
| 0.2139 | 9.38 | 1500 | 0.2432 | 0.9299 | 0.9439 | 0.9368 | 0.9474 |
| 0.2139 | 10.94 | 1750 | 0.2333 | 0.9291 | 0.9416 | 0.9353 | 0.9482 |
| 0.1478 | 12.5 | 2000 | 0.2098 | 0.9358 | 0.9491 | 0.9424 | 0.9529 |
| 0.1478 | 14.06 | 2250 | 0.2134 | 0.9379 | 0.9491 | 0.9435 | 0.9516 |
| 0.1124 | 15.62 | 2500 | 0.2144 | 0.9386 | 0.9491 | 0.9438 | 0.9516 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
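LayoutLMv3 token classification takes a document image (plus OCR words and boxes) rather than plain text. A minimal sketch, assuming the base model's processor with built-in OCR (which requires `pytesseract`) and a local `receipt.png`; both are illustrative assumptions:
```python
# Minimal sketch: receipt field tagging with the fine-tuned checkpoint above.
# Uses the base model's processor with apply_ocr=True (needs pytesseract);
# the image path is an assumption.
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(
    "Sebabrata/layoutlmv3-finetuned-cord_100"
)

image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
pred_ids = model(**encoding).logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in pred_ids])
```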
RebekkaB/rlt_2409_1450 | RebekkaB | 2022-09-24T13:22:34Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T12:52:36Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: rlt_2409_1450
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlt_2409_1450
This model is a fine-tuned version of [svalabs/gbert-large-zeroshot-nli](https://huggingface.co/svalabs/gbert-large-zeroshot-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0518
- F1: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 36 | 0.5165 | 0.8542 |
| No log | 1.99 | 72 | 0.1459 | 0.9599 |
| No log | 2.99 | 108 | 0.0733 | 0.9882 |
| No log | 3.99 | 144 | 0.1385 | 0.9502 |
| No log | 4.99 | 180 | 0.0948 | 0.9806 |
| No log | 5.99 | 216 | 0.0699 | 0.9822 |
| No log | 6.99 | 252 | 0.0582 | 0.9859 |
| No log | 7.99 | 288 | 0.0340 | 0.9933 |
| No log | 8.99 | 324 | 0.0475 | 0.9826 |
| No log | 9.99 | 360 | 0.0518 | 0.9826 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
gokuls/bert-base-emotion-intent | gokuls | 2022-09-24T13:18:17Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T13:05:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bert-base-emotion-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9385
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-emotion-intent
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1952
- Accuracy: 0.9385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4058 | 1.0 | 1000 | 0.2421 | 0.9265 |
| 0.1541 | 2.0 | 2000 | 0.1952 | 0.9385 |
| 0.1279 | 3.0 | 3000 | 0.1807 | 0.9345 |
| 0.1069 | 4.0 | 4000 | 0.2292 | 0.9365 |
| 0.081 | 5.0 | 5000 | 0.3315 | 0.936 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
SaurabhKaushik/distilbert-base-uncased-finetuned-ner | SaurabhKaushik | 2022-09-24T12:38:00Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-24T11:26:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9250386398763524
- name: Recall
type: recall
value: 0.9373531714956931
- name: F1
type: f1
value: 0.9311551925320887
- name: Accuracy
type: accuracy
value: 0.9839388692074285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0589
- Precision: 0.9250
- Recall: 0.9374
- F1: 0.9312
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2343 | 1.0 | 878 | 0.0674 | 0.9177 | 0.9233 | 0.9205 | 0.9818 |
| 0.0525 | 2.0 | 1756 | 0.0582 | 0.9245 | 0.9362 | 0.9304 | 0.9837 |
| 0.0288 | 3.0 | 2634 | 0.0589 | 0.9250 | 0.9374 | 0.9312 | 0.9839 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.12.1
|
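A minimal inference sketch for the NER checkpoint above; `aggregation_strategy="simple"` merges word-piece predictions into whole entity spans, and the example sentence is illustrative:
```python
# Minimal sketch: CoNLL-style NER with the checkpoint above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SaurabhKaushik/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```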
sd-concepts-library/hubris-oshri | sd-concepts-library | 2022-09-24T12:35:06Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T12:35:02Z | ---
license: mit
---
### Hubris-Oshri on Stable Diffusion
This is the `<Hubris>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
RebekkaB/san_2409_1325 | RebekkaB | 2022-09-24T12:13:11Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T11:50:57Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: san_2409_1325
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# san_2409_1325
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0992
- F1: 0.7727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.91 | 5 | 1.9727 | 0.1939 |
| No log | 1.91 | 10 | 1.5642 | 0.3535 |
| No log | 2.91 | 15 | 1.2698 | 0.6818 |
| No log | 3.91 | 20 | 1.3642 | 0.6429 |
| No log | 4.91 | 25 | 1.3411 | 0.6818 |
| No log | 5.91 | 30 | 1.2627 | 0.6818 |
| No log | 6.91 | 35 | 1.1269 | 0.7727 |
| No log | 7.91 | 40 | 1.0719 | 0.7727 |
| No log | 8.91 | 45 | 1.0567 | 0.7727 |
| No log | 9.91 | 50 | 1.1256 | 0.7727 |
| No log | 10.91 | 55 | 0.7085 | 0.7727 |
| No log | 11.91 | 60 | 0.9290 | 0.7727 |
| No log | 12.91 | 65 | 1.0355 | 0.7727 |
| No log | 13.91 | 70 | 1.0866 | 0.7727 |
| No log | 14.91 | 75 | 1.0992 | 0.7727 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/dr-strange | sd-concepts-library | 2022-09-24T12:11:20Z | 0 | 28 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T12:11:16Z | ---
license: mit
---
### <dr-strange> on Stable Diffusion
This is the `<dr-strange>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
sd-concepts-library/conway-pirate | sd-concepts-library | 2022-09-24T10:44:50Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T10:44:44Z | ---
license: mit
---
### Conway Pirate on Stable Diffusion
This is the `<conway>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
sd-concepts-library/yesdelete | sd-concepts-library | 2022-09-24T09:46:05Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T09:46:01Z | ---
license: mit
---
### yesdelete on Stable Diffusion
This is the `<yesdelete>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
huggingtweets/kingboiwabi | huggingtweets | 2022-09-24T09:35:11Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T09:33:46Z | ---
language: en
thumbnail: http://www.huggingtweets.com/kingboiwabi/1664012106310/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1381441602808385538/Sv6H8tsq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">King Wabi The First</div>
<div style="text-align: center; font-size: 14px;">@kingboiwabi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from King Wabi The First.
| Data | King Wabi The First |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 79 |
| Short tweets | 451 |
| Tweets kept | 2710 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lizz96v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kingboiwabi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1twunduv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1twunduv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kingboiwabi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cz_binance | huggingtweets | 2022-09-24T09:16:00Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-06-05T21:10:34Z | ---
language: en
thumbnail: http://www.huggingtweets.com/cz_binance/1664010956441/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572269909513478146/dfyw817W_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CZ πΆ Binance</div>
<div style="text-align: center; font-size: 14px;">@cz_binance</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CZ 🔶 Binance.
| Data | CZ 🔶 Binance |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 149 |
| Short tweets | 473 |
| Tweets kept | 2624 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19171g9o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cz_binance's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ngvvhd8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ngvvhd8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/cz_binance')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sd-concepts-library/coop-himmelblau | sd-concepts-library | 2022-09-24T09:06:36Z | 0 | 6 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T09:06:32Z | ---
license: mit
---
### coop himmelblau on Stable Diffusion
This is the `<coop himmelblau>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
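If you prefer to load the embedding directly with [🤗 Diffusers](https://github.com/huggingface/diffusers) rather than the notebooks, a minimal sketch follows. It assumes you have downloaded this repo's `learned_embeds.bin` locally; the base model id and placeholder token follow the usual sd-concepts-library conventions.
```python
# A minimal sketch, assuming diffusers and a local copy of this repo's learned_embeds.bin.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# the file maps the placeholder token (e.g. "<coop himmelblau>") to its learned embedding
learned = torch.load("learned_embeds.bin", map_location="cpu")
token, embedding = next(iter(learned.items()))

# register the token and write its embedding into the text encoder
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe(f"a building in the style of {token}").images[0]
image.save("coop_himmelblau.png")
```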
Here is the new concept you will be able to use as an `object`:






|
aniketface/DialoGPT-product | aniketface | 2022-09-24T09:05:12Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T08:41:37Z | ---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
--- |
mlyuya/ddpm-butterflies-128 | mlyuya | 2022-09-24T09:02:29Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-24T07:27:49Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (API per recent diffusers releases): sample one image from this pipeline
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("mlyuya/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/mlyuya/ddpm-butterflies-128/tensorboard?#scalars)
|
huggingtweets/beranewsnetwork | huggingtweets | 2022-09-24T07:04:15Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T07:01:56Z | ---
language: en
thumbnail: http://www.huggingtweets.com/beranewsnetwork/1664003049616/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445950504102735872/bCnvrgeb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bera News Network</div>
<div style="text-align: center; font-size: 14px;">@beranewsnetwork</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bera News Network.
| Data | Bera News Network |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 1 |
| Short tweets | 579 |
| Tweets kept | 2670 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/254oa32x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @beranewsnetwork's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jqeuf1y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jqeuf1y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/beranewsnetwork')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/it_airmass | huggingtweets | 2022-09-24T06:49:38Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T06:49:12Z | ---
language: en
thumbnail: http://www.huggingtweets.com/it_airmass/1664002173554/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529248676647944193/-N1UKgKg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Airmass</div>
<div style="text-align: center; font-size: 14px;">@it_airmass</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Airmass.
| Data | Airmass |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 126 |
| Short tweets | 370 |
| Tweets kept | 2753 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2f99nys0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @it_airmass's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nvbqf9p2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nvbqf9p2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/it_airmass')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/marketsmeowmeow | huggingtweets | 2022-09-24T06:43:25Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T06:42:56Z | ---
language: en
thumbnail: http://www.huggingtweets.com/marketsmeowmeow/1664001800470/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1570418907575377921/1mTVqZQZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ AI BOT π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">RB</div>
<div style="text-align: center; font-size: 14px;">@marketsmeowmeow</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from RB.
| Data | RB |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 14 |
| Short tweets | 700 |
| Tweets kept | 2530 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/a7yqyg23/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marketsmeowmeow's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ou0r1v87) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ou0r1v87/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/marketsmeowmeow')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sd-concepts-library/museum-by-coop-himmelblau | sd-concepts-library | 2022-09-24T06:39:31Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T06:39:25Z | ---
license: mit
---
### museum by coop himmelblau on Stable Diffusion
This is the `<coop himmelblau museum>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
BumblingOrange/GuraLv400 | BumblingOrange | 2022-09-24T05:56:03Z | 0 | 10 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2022-09-24T04:58:13Z | ---
license: bigscience-bloom-rail-1.0
---
Uses the Waifu Diffusion model as a base, linked here: https://huggingface.co/hakurei/waifu-diffusion
Custom Dreambooth model based on the likeness of Hololive VTuber Gawr Gura. The dataset comprised 450 training images and 900 regularization images, and training ran for 3,000 steps.
To use the model, simply insert the name 'Gawr Gura' into your prompts; a hedged sampling sketch follows.
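For illustration only, this is how such a Dreambooth checkpoint is typically sampled with diffusers, assuming the repo is (or has been converted to) diffusers format, which this card does not confirm:
```python
# A sketch only: assumes a diffusers-format export of this Dreambooth checkpoint.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("BumblingOrange/GuraLv400")
image = pipe("portrait of Gawr Gura, detailed anime illustration").images[0]
image.save("gura.png")
```
|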
sd-concepts-library/guttestreker | sd-concepts-library | 2022-09-24T04:19:49Z | 0 | 11 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T04:19:26Z | ---
license: mit
---
### guttestreker on Stable Diffusion
This is the `<guttestreker>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:













|
sd-concepts-library/skyfalls | sd-concepts-library | 2022-09-24T02:09:42Z | 0 | 3 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-19T05:48:35Z | ---
license: mit
---
### SkyFalls on Stable Diffusion
This is the `<SkyFalls>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
HumanCompatibleAI/ppo-AsteroidsNoFrameskip-v4 | HumanCompatibleAI | 2022-09-23T22:37:49Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"AsteroidsNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-23T22:35:00Z | ---
library_name: stable-baselines3
tags:
- AsteroidsNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 1666.00 +/- 472.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AsteroidsNoFrameskip-v4
type: AsteroidsNoFrameskip-v4
---
# **PPO** Agent playing **AsteroidsNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **AsteroidsNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env AsteroidsNoFrameskip-v4 -orga HumanCompatibleAI -f logs/
python enjoy.py --algo ppo --env AsteroidsNoFrameskip-v4 -f logs/
```
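Outside the Zoo scripts, the checkpoint can also be pulled straight from the Hub in Python. Below is a sketch using the `huggingface_sb3` helper; the archive filename is an assumption based on the Zoo's usual naming:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# download the checkpoint from this repo (filename assumed from RL Zoo conventions)
checkpoint = load_from_hub(
    repo_id="HumanCompatibleAI/ppo-AsteroidsNoFrameskip-v4",
    filename="ppo-AsteroidsNoFrameskip-v4.zip",
)
model = PPO.load(checkpoint)
```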
## Training (with the RL Zoo)
```
python train.py --algo ppo --env AsteroidsNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env AsteroidsNoFrameskip-v4 -f logs/ -orga HumanCompatibleAI
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
philschmid/openai-whisper-endpoint | philschmid | 2022-09-23T21:26:56Z | 0 | 11 | generic | [
"generic",
"audio",
"automatic-speech-recognition",
"endpoints-template",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-23T20:27:44Z | ---
license: mit
tags:
- audio
- automatic-speech-recognition
- endpoints-template
library_name: generic
inference: false
---
# OpenAI [Whisper](https://github.com/openai/whisper) Inference Endpoint example
> Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
For more information about the model, license and limitations check the original repository at [openai/whisper](https://github.com/openai/whisper).
---
This repository implements a custom `handler` task for `automatic-speech-recognition` for 🤗 Inference Endpoints using OpenAI's new Whisper model. The code for the customized handler is in [handler.py](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/create_handler.ipynb) included that shows how to create the `handler.py`.
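For orientation, a custom handler for 🤗 Inference Endpoints is a class exposing `__init__` and `__call__`. Below is a minimal sketch of what a Whisper handler can look like; the actual `handler.py` in this repo may differ, and the checkpoint size chosen here (`base`) is an assumption:
```python
# handler.py - a minimal sketch, not necessarily identical to this repo's handler
import tempfile
from typing import Any, Dict

import whisper


class EndpointHandler:
    def __init__(self, path: str = ""):
        # load the Whisper checkpoint once at endpoint start-up ("base" is an assumption)
        self.model = whisper.load_model("base")

    def __call__(self, data: Dict[str, Any]) -> Dict[str, str]:
        # "inputs" carries the raw audio bytes from the request body
        audio_bytes = data["inputs"]
        # whisper's transcribe expects a path, so spill the bytes to a temp file
        with tempfile.NamedTemporaryFile(suffix=".flac") as f:
            f.write(audio_bytes)
            f.flush()
            result = self.model.transcribe(f.name)
        return {"text": result["text"]}
```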
### Request
The endpoint expects a binary audio file. Below is a cURL example and a Python example using the `requests` library.
**curl**
```bash
# load audio file
wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
# run request
curl --request POST \
--url https://{ENDPOINT}/ \
--header 'Content-Type: audio/x-flac' \
--header 'Authorization: Bearer {HF_TOKEN}' \
--data-binary '@sample1.flac'
```
**Python**
```python
import mimetypes

import requests as r

ENDPOINT_URL = ""
HF_TOKEN = ""

def predict(path_to_audio: str = None):
    # read the audio file as raw bytes
    with open(path_to_audio, "rb") as i:
        b = i.read()
    # guess the mimetype from the file extension
    content_type = mimetypes.guess_type(path_to_audio)[0]
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": content_type,
    }
    response = r.post(ENDPOINT_URL, headers=headers, data=b)
    return response.json()

prediction = predict(path_to_audio="sample1.flac")
prediction
```
expected output
```json
{"text": " going along slushy country roads and speaking to damp audiences in draughty school rooms day after day for a fortnight. He'll have to put in an appearance at some place of worship on Sunday morning, and he can come to us immediately afterwards."}
```
|
affahrizain/roberta-base-finetuned-jigsaw-toxic | affahrizain | 2022-09-23T21:10:22Z | 113 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-29T03:46:08Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-finetuned-jigsaw-toxic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-jigsaw-toxic
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on a Jigsaw toxic-comment dataset (the auto-generated card did not record the dataset name).
It achieves the following results on the evaluation set:
- Loss: 0.0859
- Accuracy: 0.9747
- F1: 0.9746
## Model description
More information needed
## Intended uses & limitations
More information needed
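Since this section is otherwise empty, here is a minimal, hedged sketch of how a fine-tuned toxic-comment classifier like this one is typically queried (label names depend on the checkpoint's config):
```python
from transformers import pipeline

# a sketch: load this checkpoint as a standard text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="affahrizain/roberta-base-finetuned-jigsaw-toxic",
)
print(classifier("You are a wonderful person!"))
# -> [{'label': ..., 'score': ...}]  (labels per the checkpoint's config)
```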
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1179 | 1.0 | 2116 | 0.0982 | 0.9694 | 0.9694 |
| 0.0748 | 2.0 | 4232 | 0.0859 | 0.9747 | 0.9746 |
| 0.0582 | 3.0 | 6348 | 0.0916 | 0.9750 | 0.9750 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
g30rv17ys/ddpm-geeve-cnv-1000-200ep | g30rv17ys | 2022-09-23T19:10:42Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-23T15:29:54Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-cnv-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (API per recent diffusers releases); repo id taken from this card's location
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-cnv-1000-200ep")
image = pipeline().images[0]
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-cnv-1000-200ep/tensorboard?#scalars)
|