modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-14 00:44:55) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (519 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-14 00:44:41) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
masakhane/m2m100_418M_en_lug_rel_news_ft | masakhane | 2022-09-24T15:06:28Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"lug",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-05T11:21:04Z | ---
language:
- en
- lug
license: afl-3.0
---
|
masakhane/m2m100_418M_lug_en_rel_news_ft | masakhane | 2022-09-24T15:06:26Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"lug",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-05T11:21:22Z | ---
language:
- lug
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_lug_en_rel_ft | masakhane | 2022-09-24T15:06:26Z | 99 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"lug",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-05T11:22:09Z | ---
language:
- lug
- en
license: afl-3.0
---
|
masakhane/afrimt5_en_pcm_news | masakhane | 2022-09-24T15:06:23Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"pcm",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-09T13:04:05Z | ---
language:
- en
- pcm
license: afl-3.0
---
|
masakhane/afrimbart_en_pcm_news | masakhane | 2022-09-24T15:06:22Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"pcm",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-09T13:05:13Z | ---
language:
- en
- pcm
license: afl-3.0
---
|
masakhane/afribyt5_pcm_en_news | masakhane | 2022-09-24T15:06:21Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T06:41:15Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
masakhane/byt5_en_pcm_news | masakhane | 2022-09-24T15:06:20Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"pcm",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T06:41:46Z | ---
language:
- en
- pcm
license: afl-3.0
---
|
masakhane/mt5_pcm_en_news | masakhane | 2022-09-24T15:06:19Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:53:01Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
masakhane/mt5_en_pcm_news | masakhane | 2022-09-24T15:06:18Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"pcm",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:53:28Z | ---
language:
- en
- pcm
license: afl-3.0
---
|
masakhane/m2m100_418M_en_pcm_news | masakhane | 2022-09-24T15:06:17Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"pcm",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:55:58Z | ---
language:
- en
- pcm
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_news | masakhane | 2022-09-24T15:06:17Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:56:16Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_rel_news | masakhane | 2022-09-24T15:06:16Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:56:38Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_pcm_rel_ft | masakhane | 2022-09-24T15:06:14Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"pcm",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:58:06Z | ---
language:
- en
- pcm
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_rel | masakhane | 2022-09-24T15:06:13Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:58:50Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_rel_news_ft | masakhane | 2022-09-24T15:06:13Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"pcm",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T08:57:44Z | ---
language:
- pcm
- en
license: afl-3.0
---
|
masakhane/afrimbart_yor_en_news | masakhane | 2022-09-24T15:06:11Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:11:53Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/afrimt5_yor_en_news | masakhane | 2022-09-24T15:06:11Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:11:28Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/afribyt5_yor_en_news | masakhane | 2022-09-24T15:06:10Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:13:17Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/afribyt5_en_yor_news | masakhane | 2022-09-24T15:06:09Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:13:36Z | ---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/mt5_yor_en_news | masakhane | 2022-09-24T15:06:08Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:14:35Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/byt5_yor_en_news | masakhane | 2022-09-24T15:06:08Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:14:16Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/mbart50_en_yor_news | masakhane | 2022-09-24T15:06:07Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"yor",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:15:22Z | ---
language:
- en
- yor
license: afl-3.0
---
|
masakhane/m2m100_418M_yor_en_rel_news | masakhane | 2022-09-24T15:06:06Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:20:08Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/mbart50_yor_en_news | masakhane | 2022-09-24T15:06:06Z | 95 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:15:39Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_yor_en_rel_ft | masakhane | 2022-09-24T15:06:02Z | 104 | 1 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"yor",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T12:21:44Z | ---
language:
- yor
- en
license: afl-3.0
---
|
masakhane/afrimt5_swa_en_news | masakhane | 2022-09-24T15:06:00Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T09:01:34Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/afrimbart_en_swa_news | masakhane | 2022-09-24T15:05:59Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:09:43Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/byt5_swa_en_news | masakhane | 2022-09-24T15:05:58Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:10:57Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/afribyt5_swa_en_news | masakhane | 2022-09-24T15:05:58Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:10:03Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/mt5_swa_en_news | masakhane | 2022-09-24T15:05:56Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:11:18Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/mbart50_swa_en_news | masakhane | 2022-09-24T15:05:55Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:12:16Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_news | masakhane | 2022-09-24T15:05:55Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:12:36Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_news | masakhane | 2022-09-24T15:05:53Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:12:54Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_rel_ft | masakhane | 2022-09-24T15:05:52Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:14:46Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_rel_ft | masakhane | 2022-09-24T15:05:52Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:14:29Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/m2m100_418M_swa_en_rel_news_ft | masakhane | 2022-09-24T15:05:51Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"swa",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:14:10Z | ---
language:
- swa
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_swa_rel | masakhane | 2022-09-24T15:05:50Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"swa",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T10:15:23Z | ---
language:
- en
- swa
license: afl-3.0
---
|
masakhane/afrimt5_en_tsn_news | masakhane | 2022-09-24T15:05:49Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T13:48:47Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/afrimbart_tsn_en_news | masakhane | 2022-09-24T15:05:48Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T13:49:34Z | ---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/byt5_en_tsn_news | masakhane | 2022-09-24T15:05:46Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T13:52:58Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/afribyt5_en_tsn_news | masakhane | 2022-09-24T15:05:46Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T13:52:35Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/mbart50_tsn_en_news | masakhane | 2022-09-24T15:05:44Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T14:02:58Z | ---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_tsn_en_news | masakhane | 2022-09-24T15:05:43Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T14:22:56Z | ---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_tsn_news | masakhane | 2022-09-24T15:05:43Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T14:20:25Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/m2m100_418M_tsn_en_rel_ft | masakhane | 2022-09-24T15:05:40Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"tsn",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T14:32:34Z | ---
language:
- tsn
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_tsn_rel | masakhane | 2022-09-24T15:05:39Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T14:39:01Z | ---
language:
- en
- tsn
license: afl-3.0
---
|
masakhane/afrimt5_twi_en_news | masakhane | 2022-09-24T15:05:37Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:50:58Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/afribyt5_en_twi_news | masakhane | 2022-09-24T15:05:36Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:56:55Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/afrimbart_twi_en_news | masakhane | 2022-09-24T15:05:36Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:53:34Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/afribyt5_twi_en_news | masakhane | 2022-09-24T15:05:35Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:56:34Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/byt5_en_twi_news | masakhane | 2022-09-24T15:05:35Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:02:29Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/mt5_twi_en_news | masakhane | 2022-09-24T15:05:33Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:05:43Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_twi_news | masakhane | 2022-09-24T15:05:32Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:07:19Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/mt5_en_twi_news | masakhane | 2022-09-24T15:05:32Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:06:00Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/m2m100_418M_twi_en_rel_news | masakhane | 2022-09-24T15:05:30Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:09:58Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_twi_rel_news | masakhane | 2022-09-24T15:05:30Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"twi",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:10:15Z | ---
language:
- en
- twi
license: afl-3.0
---
|
masakhane/m2m100_418M_twi_en_rel | masakhane | 2022-09-24T15:05:26Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"twi",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:17:38Z | ---
language:
- twi
- en
license: afl-3.0
---
|
masakhane/afrimt5_zul_en_news | masakhane | 2022-09-24T15:05:24Z | 99 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:51:37Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/afrimt5_en_zul_news | masakhane | 2022-09-24T15:05:24Z | 84 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:52:03Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/afrimbart_zul_en_news | masakhane | 2022-09-24T15:05:22Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:54:35Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/afribyt5_zul_en_news | masakhane | 2022-09-24T15:05:21Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:57:29Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/afribyt5_en_zul_news | masakhane | 2022-09-24T15:05:21Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T08:57:13Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/byt5_en_zul_news | masakhane | 2022-09-24T15:05:20Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:02:52Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/mbart50_en_zul_news | masakhane | 2022-09-24T15:05:18Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:04:24Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/mt5_en_zul_news | masakhane | 2022-09-24T15:05:17Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:06:39Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_news | masakhane | 2022-09-24T15:05:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:09:23Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_news_ft | masakhane | 2022-09-24T15:05:14Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:13:58Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M-EN-NEWS | masakhane | 2022-09-24T15:05:11Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"dataset:masakhane/mafand",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-02T22:21:22Z | ---
language: en
license: afl-3.0
datasets:
- masakhane/mafand
---
### Citation Information
```
@inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
}
``` |
masakhane/m2m100_418M-FR-NEWS | masakhane | 2022-09-24T15:05:11Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-02T22:22:31Z | ---
language: fr
license: afl-3.0
---
### Citation Information
```
@inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
}
``` |
masakhane/m2m100_418M_en_amh_rel | masakhane | 2022-09-24T15:05:10Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"amh",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:06:14Z | ---
language:
- en
- amh
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_amh_en_rel | masakhane | 2022-09-24T15:05:10Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"amh",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:05:47Z | ---
language:
- amh
- en
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_kin_en_rel | masakhane | 2022-09-24T15:05:09Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"kin",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:06:42Z | ---
language:
- kin
- en
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_nya_en_rel | masakhane | 2022-09-24T15:05:08Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"nya",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:08:07Z | ---
language:
- nya
- en
license: cc-by-nc-4.0
---
|
rosamondthalken/t5-small-sci-names | rosamondthalken | 2022-09-24T14:39:00Z | 166 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-16T17:44:50Z | # t5-small-sci-names
Biodiversity literature is dedicated to the identification, documentation, and categorization of plants, fungi, animals, and other living organisms. Correctly extracting the name of an organism within these documents involves finding the entire scientific name–including the genus, specific epithet, and author name. Extracting these names allows biologists to access documents about a species more comprehensively, and to track an organism’s history of documentation, which includes biological changes and changes in how scientists describe them.
**t5-small-sci-names** uses advances in text-to-text generation to generate scientific names and authors from biodiversity literature. This model was trained on hand-labeled biodiversity texts, including labeled information about a mentioned organism's genus (abbreviated and expanded), specific epithet, and author. This model was trained to output 0-N scientific names with specific prefixes (e.g. "genus = " or "epithet = ") and performs best with anywhere from 20-120 words.
You can also use the model in this tutorial for [scientific names generation](https://colab.research.google.com/drive/1GEpnCaMJYiPIhuZiDJ1X1pZsGtGSm8Ds?usp=sharing).
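For a quick local test, the following is a minimal sketch of querying this text2text checkpoint with the `transformers` pipeline. The sample passage and the expected prefix-style output (e.g. "genus = ...") are illustrative assumptions; see the linked tutorial for the intended workflow.
```python
from transformers import pipeline

# Minimal sketch (assumptions noted above): load the checkpoint as a
# text2text-generation pipeline and ask it to extract scientific names.
extractor = pipeline(
    "text2text-generation",
    model="rosamondthalken/t5-small-sci-names",
)

passage = (
    "Quercus alba L., the white oak, was documented alongside "
    "Q. rubra in the northern hardwood plots."
)
# The card states the model emits 0-N names with prefixes such as "genus = ".
print(extractor(passage, max_length=64)[0]["generated_text"])
```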
*Note that this model is still a work in progress. Any feedback is welcome.* |
pere/pk-nb-t5x | pere | 2022-09-24T14:38:59Z | 0 | 2 | null | [
"region:us"
]
| null | 2022-04-01T06:33:23Z | Just a placeholder for a future model |
sd-concepts-library/paolo-bonolis | sd-concepts-library | 2022-09-24T14:36:08Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T13:56:26Z | ---
license: mit
---
### paolo bonolis on Stable Diffusion
This is the `<paolo-bonolis>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sd-concepts-library/repeat | sd-concepts-library | 2022-09-24T14:17:05Z | 0 | 2 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T14:16:59Z | ---
license: mit
---
### REPEAT on Stable Diffusion
This is the `<repeat>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
gokuls/BERT-tiny-emotion-intent | gokuls | 2022-09-24T14:11:28Z | 268 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T14:01:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: BERT-tiny-emotion-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-tiny-emotion-intent
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3620
- Accuracy: 0.91
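The card does not yet include an inference example, so the following is a minimal sketch of running the checkpoint as an emotion classifier with the `transformers` pipeline. The sample sentence is arbitrary, and depending on the saved config the prediction may come back as a generic id (e.g. "LABEL_3") rather than an emotion name.
```python
from transformers import pipeline

# Minimal sketch (assumptions noted above): classify a sentence with the
# fine-tuned BERT-tiny emotion model.
classifier = pipeline(
    "text-classification",
    model="gokuls/BERT-tiny-emotion-intent",
)
print(classifier("I can't believe how well this tiny model works!"))
```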
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2603 | 1.0 | 1000 | 0.7766 | 0.7815 |
| 0.5919 | 2.0 | 2000 | 0.4117 | 0.884 |
| 0.367 | 3.0 | 3000 | 0.3188 | 0.8995 |
| 0.2848 | 4.0 | 4000 | 0.2928 | 0.8985 |
| 0.2395 | 5.0 | 5000 | 0.2906 | 0.898 |
| 0.2094 | 6.0 | 6000 | 0.2887 | 0.907 |
| 0.1884 | 7.0 | 7000 | 0.2831 | 0.9065 |
| 0.1603 | 8.0 | 8000 | 0.3044 | 0.9065 |
| 0.1519 | 9.0 | 9000 | 0.3124 | 0.9095 |
| 0.1291 | 10.0 | 10000 | 0.3256 | 0.9065 |
| 0.1179 | 11.0 | 11000 | 0.3651 | 0.9035 |
| 0.1091 | 12.0 | 12000 | 0.3620 | 0.91 |
| 0.0977 | 13.0 | 13000 | 0.3992 | 0.907 |
| 0.0914 | 14.0 | 14000 | 0.4285 | 0.908 |
| 0.0876 | 15.0 | 15000 | 0.4268 | 0.9055 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/osaka-jyo | sd-concepts-library | 2022-09-24T13:47:07Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T13:47:03Z | ---
license: mit
---
### osaka jyo on Stable Diffusion
This is the `<osaka-jyo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
gokuls/distilroberta-emotion-intent | gokuls | 2022-09-24T13:36:17Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T13:26:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: distilroberta-emotion-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-emotion-intent
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1496
- Accuracy: 0.9435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4501 | 1.0 | 1000 | 0.2432 | 0.924 |
| 0.1947 | 2.0 | 2000 | 0.1646 | 0.934 |
| 0.1497 | 3.0 | 3000 | 0.1382 | 0.9405 |
| 0.1316 | 4.0 | 4000 | 0.1496 | 0.9435 |
| 0.1145 | 5.0 | 5000 | 0.1684 | 0.9385 |
| 0.1 | 6.0 | 6000 | 0.2342 | 0.943 |
| 0.0828 | 7.0 | 7000 | 0.2807 | 0.939 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
RebekkaB/rlt_2409_1450 | RebekkaB | 2022-09-24T13:22:34Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T12:52:36Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: rlt_2409_1450
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlt_2409_1450
This model is a fine-tuned version of [svalabs/gbert-large-zeroshot-nli](https://huggingface.co/svalabs/gbert-large-zeroshot-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0518
- F1: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 36 | 0.5165 | 0.8542 |
| No log | 1.99 | 72 | 0.1459 | 0.9599 |
| No log | 2.99 | 108 | 0.0733 | 0.9882 |
| No log | 3.99 | 144 | 0.1385 | 0.9502 |
| No log | 4.99 | 180 | 0.0948 | 0.9806 |
| No log | 5.99 | 216 | 0.0699 | 0.9822 |
| No log | 6.99 | 252 | 0.0582 | 0.9859 |
| No log | 7.99 | 288 | 0.0340 | 0.9933 |
| No log | 8.99 | 324 | 0.0475 | 0.9826 |
| No log | 9.99 | 360 | 0.0518 | 0.9826 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
SaurabhKaushik/distilbert-base-uncased-finetuned-ner | SaurabhKaushik | 2022-09-24T12:38:00Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-24T11:26:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9250386398763524
- name: Recall
type: recall
value: 0.9373531714956931
- name: F1
type: f1
value: 0.9311551925320887
- name: Accuracy
type: accuracy
value: 0.9839388692074285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0589
- Precision: 0.9250
- Recall: 0.9374
- F1: 0.9312
- Accuracy: 0.9839
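The card does not show an inference example, so the following is a minimal sketch of running the checkpoint as a named-entity tagger with the `transformers` pipeline. The sample sentence is arbitrary, and the entity labels returned depend on the label mapping saved with the model.
```python
from transformers import pipeline

# Minimal sketch (assumptions noted above): tag entities in a sentence.
# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="SaurabhKaushik/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```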
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2343 | 1.0 | 878 | 0.0674 | 0.9177 | 0.9233 | 0.9205 | 0.9818 |
| 0.0525 | 2.0 | 1756 | 0.0582 | 0.9245 | 0.9362 | 0.9304 | 0.9837 |
| 0.0288 | 3.0 | 2634 | 0.0589 | 0.9250 | 0.9374 | 0.9312 | 0.9839 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/hubris-oshri | sd-concepts-library | 2022-09-24T12:35:06Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T12:35:02Z | ---
license: mit
---
### Hubris-Oshri on Stable Diffusion
This is the `<Hubris>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
sd-concepts-library/yilanov2 | sd-concepts-library | 2022-09-24T12:05:27Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T12:05:22Z | ---
license: mit
---
### <yilanov2> on Stable Diffusion
This is the `<yilanov>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
ckiplab/gpt2-tiny-chinese | ckiplab | 2022-09-24T11:53:54Z | 133 | 5 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"lm-head",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T11:49:21Z | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- lm-head
- gpt2
- zh
license: gpl-3.0
---
# CKIP GPT2 Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/gpt2-tiny-chinese')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
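As a further illustration that is not part of the official instructions above, the following sketch generates text with this checkpoint while keeping the recommended `BertTokenizerFast` pairing; the causal-LM head class and the prompt are assumptions for demonstration.
```python
from transformers import BertTokenizerFast, AutoModelForCausalLM, pipeline

# Sketch only (see assumptions above): load the tokenizer and the GPT2 LM head,
# then generate a short continuation of an arbitrary prompt.
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForCausalLM.from_pretrained('ckiplab/gpt2-tiny-chinese')

generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
print(generator('今天天氣真好,', max_length=30, num_return_sequences=1))
```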
|
RebekkaB/san_nli_2409_1325 | RebekkaB | 2022-09-24T11:50:33Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T11:27:27Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: san_nli_2409_1325
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# san_nli_2409_1325
This model is a fine-tuned version of [svalabs/gbert-large-zeroshot-nli](https://huggingface.co/svalabs/gbert-large-zeroshot-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
- F1: 0.9219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.93 | 10 | 0.2410 | 0.9219 |
| No log | 1.93 | 20 | 0.5240 | 0.9149 |
| No log | 2.93 | 30 | 0.4756 | 0.9219 |
| No log | 3.93 | 40 | 0.3856 | 0.9219 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/ilo-kunst | sd-concepts-library | 2022-09-24T11:33:00Z | 0 | 3 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T11:32:54Z | ---
license: mit
---
### Ilo Kunst on Stable Diffusion
This is the `<ilo-kunst>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
ScriptEdgeAI/MarathiSentiment-Bloom-560m | ScriptEdgeAI | 2022-09-24T08:14:05Z | 102 | 5 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-classification",
"mr",
"Sentiment-Analysis",
"arxiv:2205.14728",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T07:25:34Z | ---
language:
- mr
tags:
- mr
- Sentiment-Analysis
license: cc-by-nc-4.0
widget:
- text: "मला तुम्ही आवडता. मी तुझ्यावर प्रेम करतो."
---
# MarathiSentiment-Bloom-560m
MarathiSentiment-Bloom-560m is a fine-tuned BLOOM-560m model trained by ScriptEdge on the MahaNLP tweets dataset from L3Cube-MahaNLP.
## Contributors
- Trained by: Venkatesh Soni
- Assistance: Rayansh Srivastava
- Supervision: Akshay Ugale, Madhukar Alhat
## Usage
- This model is intended for non-commercial use only; a minimal inference sketch follows below.
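The following is a minimal, hedged sketch of querying the classifier with the `transformers` pipeline. The example sentence is the widget text from this card, and the label names in the output depend on how the saved config maps class ids.
```python
from transformers import pipeline

# Minimal sketch (assumptions noted above): classify a Marathi sentence.
classifier = pipeline(
    "text-classification",
    model="ScriptEdgeAI/MarathiSentiment-Bloom-560m",
)
print(classifier("मला तुम्ही आवडता. मी तुझ्यावर प्रेम करतो."))
```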
## Model best metrics
| *Model* | *Data* | *Accuracy* |
|---------------------------------------------------|---------------------|-------------------|
| bigscience/bloom-560m | Validation | 34.7 |
| bigscience/bloom-560m | Test | **34.8** |
| ScriptEdgeAI/MarathiSentiment-Bloom-560m | Validation | 76.0 |
| ScriptEdgeAI/MarathiSentiment-Bloom-560m | Test | **77.0** |
Please cite L3Cube-Pune for the dataset used:
```
@article {joshi2022l3cube,
title= {L3Cube-MahaNLP: Marathi Natural Language Processing Datasets, Models, and Library},
author= {Joshi, Raviraj},
journal= {arXiv preprint arXiv:2205.14728},
year= {2022}
}
``` |
huggingtweets/pentosh1 | huggingtweets | 2022-09-24T08:03:41Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T08:02:41Z | ---
language: en
thumbnail: http://www.huggingtweets.com/pentosh1/1664006616559/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1553520707472072708/5eseDj4F_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pentoshi 🐧</div>
<div style="text-align: center; font-size: 14px;">@pentosh1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pentoshi 🐧.
| Data | Pentoshi 🐧 |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 24 |
| Short tweets | 573 |
| Tweets kept | 2645 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kzanxqd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pentosh1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3e7vuikz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3e7vuikz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pentosh1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/beranewsnetwork | huggingtweets | 2022-09-24T07:04:15Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T07:01:56Z | ---
language: en
thumbnail: http://www.huggingtweets.com/beranewsnetwork/1664003049616/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445950504102735872/bCnvrgeb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bera News Network</div>
<div style="text-align: center; font-size: 14px;">@beranewsnetwork</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bera News Network.
| Data | Bera News Network |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 1 |
| Short tweets | 579 |
| Tweets kept | 2670 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/254oa32x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @beranewsnetwork's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jqeuf1y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jqeuf1y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/beranewsnetwork')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/it_airmass | huggingtweets | 2022-09-24T06:49:38Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T06:49:12Z | ---
language: en
thumbnail: http://www.huggingtweets.com/it_airmass/1664002173554/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529248676647944193/-N1UKgKg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Airmass</div>
<div style="text-align: center; font-size: 14px;">@it_airmass</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Airmass.
| Data | Airmass |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 126 |
| Short tweets | 370 |
| Tweets kept | 2753 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2f99nys0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @it_airmass's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nvbqf9p2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nvbqf9p2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/it_airmass')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sd-concepts-library/museum-by-coop-himmelblau | sd-concepts-library | 2022-09-24T06:39:31Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T06:39:25Z | ---
license: mit
---
### museum by coop himmelblau on Stable Diffusion
This is the `<coop himmelblau museum>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
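If you prefer to work outside the notebooks, the snippet below is a minimal, hedged sketch of loading the concept with a recent `diffusers` release (the `load_textual_inversion` helper requires a recent version, and the placeholder token is taken from this card — the authoritative token name ships with the repo):
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: assumes a recent diffusers release that provides load_textual_inversion,
# and that the placeholder token matches the one named in this card.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/museum-by-coop-himmelblau")

image = pipe("a photo of the <coop himmelblau museum> on a hillside").images[0]
image.save("museum.png")
```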
Here is the new concept you will be able to use as an `object`:




|
huggingtweets/inversebrah | huggingtweets | 2022-09-24T06:29:34Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-04-28T20:05:27Z | ---
language: en
thumbnail: http://www.huggingtweets.com/inversebrah/1664000969650/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1547362404061052928/WWnVS98w_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">smolting (wassie, verse)</div>
<div style="text-align: center; font-size: 14px;">@inversebrah</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from smolting (wassie, verse).
| Data | smolting (wassie, verse) |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 1592 |
| Short tweets | 865 |
| Tweets kept | 760 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mt8mw7j5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @inversebrah's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37fqg9kh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37fqg9kh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/inversebrah')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BumblingOrange/GuraLv400 | BumblingOrange | 2022-09-24T05:56:03Z | 0 | 10 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2022-09-24T04:58:13Z | ---
license: bigscience-bloom-rail-1.0
---
This model uses [Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion) as its base.
It is a custom Dreambooth model trained on the likeness of the Hololive VTuber Gawr Gura, using 450 training images and 900 regularization images over 3,000 training steps.
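As a rough, hedged generation sketch (it assumes the repository ships diffusers-format weights, which this card does not state; if only a `.ckpt` is provided it must be converted first), the trigger phrase is simply placed in the prompt:
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: assumes diffusers-format weights are available in this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "BumblingOrange/GuraLv400", torch_dtype=torch.float16
).to("cuda")

image = pipe("portrait of Gawr Gura, detailed illustration").images[0]
image.save("gura.png")
```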
More generally, to use the model, simply include the name 'Gawr Gura' anywhere in your prompts. |
sd-concepts-library/ransom | sd-concepts-library | 2022-09-24T05:44:13Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T05:44:07Z | ---
license: mit
---
### ransom on Stable Diffusion
This is the `<ransom>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:








|
sd-concepts-library/guttestreker | sd-concepts-library | 2022-09-24T04:19:49Z | 0 | 11 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T04:19:26Z | ---
license: mit
---
### guttestreker on Stable Diffusion
This is the `<guttestreker>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
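Outside the notebooks, a minimal, hedged sketch with a recent `diffusers` release (the `load_textual_inversion` helper is assumed to be available, and the placeholder token is taken from this card) might look like:
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: assumes a recent diffusers release with load_textual_inversion.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/guttestreker")

# Since this concept is a style, it is referenced as part of the style description.
image = pipe("a quiet harbour town in the style of <guttestreker>").images[0]
image.save("guttestreker-style.png")
```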
Here is the new concept you will be able to use as a `style`:













|
nateraw/convnext-tiny-224-finetuned-eurosat-albumentations | nateraw | 2022-09-24T01:57:26Z | 196 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-09-24T01:44:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-finetuned-eurosat-albumentations
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9814814814814815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-albumentations
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0608
- Accuracy: 0.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
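The values above correspond roughly to the following hedged `TrainingArguments` sketch (the actual training script is not part of this card, and the evaluation strategy is an assumption inferred from the per-epoch results below):
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above; not the original script.
training_args = TrainingArguments(
    output_dir="convnext-tiny-224-finetuned-eurosat-albumentations",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = total train batch size of 128
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    evaluation_strategy="epoch",  # assumption, based on the per-epoch results table
)
```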
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1449 | 1.0 | 190 | 0.1327 | 0.9685 |
| 0.0766 | 2.0 | 380 | 0.0762 | 0.9774 |
| 0.0493 | 3.0 | 570 | 0.0608 | 0.9815 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
huggingtweets/tim_cook | huggingtweets | 2022-09-24T01:11:00Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/tim_cook/1663981855625/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1535420431766671360/Pwq-1eJc_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tim Cook</div>
<div style="text-align: center; font-size: 14px;">@tim_cook</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tim Cook.
| Data | Tim Cook |
| --- | --- |
| Tweets downloaded | 1385 |
| Retweets | 20 |
| Short tweets | 13 |
| Tweets kept | 1352 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d94dtsh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tim_cook's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19bm0x3l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19bm0x3l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/tim_cook')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
neelmehta00/t5-small-finetuned-eli5-neel | neelmehta00 | 2022-09-23T23:44:40Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-23T22:36:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-small-finetuned-eli5-neel
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 9.613
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-eli5-neel
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6887
- Rouge1: 9.613
- Rouge2: 1.7491
- Rougel: 8.8341
- Rougelsum: 9.3402
- Gen Len: 19.0
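As a minimal, hedged usage sketch (the input format is an assumption, since the card does not document how the ELI5 questions were preprocessed):
```python
from transformers import pipeline

# Hedged sketch: the prompt format is assumed, not documented in this card.
generator = pipeline(
    "text2text-generation",
    model="neelmehta00/t5-small-finetuned-eli5-neel",
)
print(generator("why is the sky blue?", max_length=64)[0]["generated_text"])
```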
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.896 | 1.0 | 17040 | 3.6887 | 9.613 | 1.7491 | 8.8341 | 9.3402 | 19.0 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/david-martinez-edgerunners | sd-concepts-library | 2022-09-23T23:42:30Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-23T23:42:24Z | ---
license: mit
---
### David Martinez Edgerunners on Stable Diffusion
This is the `<david-martinez-edgerunners>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:



























|