modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
google/t5-efficient-large-kv128 | 4843f8bd75fc0cbb4e912c032dfb8ec3222f590e | 2022-02-15T10:55:13.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-large-kv128 | 7 | null | transformers | 14,100 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-LARGE-KV128 (Deep-Narrow version)
T5-Efficient-LARGE-KV128 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-large-kv128** - is of model type **Large** with the following variations:
- **kv** is **128**
It has **1039.71** million parameters and thus requires *ca.* **4158.86 MB** of memory in full precision (*fp32*)
or **2079.43 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal loading sketch follows the framework-specific examples below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
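A minimal loading sketch is shown below; it assumes the Hugging Face `transformers` library and its standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes and is only a starting point for the training examples above, not a complete fine-tuning recipe.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the pretrained-only checkpoint; it still needs task-specific fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-large-kv128")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-large-kv128")

# Encode a toy input to check that the checkpoint loads; use the scripts above
# for actual fine-tuning.
inputs = tokenizer("summarize: a toy input sentence", return_tensors="pt")
```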
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future. |
google/t5-efficient-large-nh8-nl32 | 3bc9149e42053e256eacf4d5587d3753a66ee584 | 2022-02-15T10:49:28.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-large-nh8-nl32 | 7 | null | transformers | 14,101 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-LARGE-NH8-NL32 (Deep-Narrow version)
T5-Efficient-LARGE-NH8-NL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-large-nh8-nl32** - is of model type **Large** with the following variations:
- **nh** is **8**
- **nl** is **32**
It has **771.34** million parameters and thus requires *ca.* **3085.35 MB** of memory in full precision (*fp32*)
or **1542.68 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal loading sketch follows the framework-specific examples below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
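A minimal loading sketch is shown below; it assumes the Hugging Face `transformers` library and its standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes and is only a starting point for the training examples above, not a complete fine-tuning recipe.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the pretrained-only checkpoint; it still needs task-specific fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-large-nh8-nl32")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-large-nh8-nl32")

# Encode a toy input to check that the checkpoint loads; use the scripts above
# for actual fine-tuning.
inputs = tokenizer("summarize: a toy input sentence", return_tensors="pt")
```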
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future. |
google/t5-efficient-small-el16-dl4 | 622624c8eacaa57b94ba33ebd03cab41a3b616a4 | 2022-02-15T10:56:59.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-el16-dl4 | 7 | null | transformers | 14,102 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-EL16-DL4 (Deep-Narrow version)
T5-Efficient-SMALL-EL16-DL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-small-el16-dl4** - is of model type **Small** with the following variations:
- **el** is **16**
- **dl** is **4**
It has **83.6** million parameters and thus requires *ca.* **334.41 MB** of memory in full precision (*fp32*)
or **167.21 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal loading sketch follows the framework-specific examples below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
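A minimal loading sketch is shown below; it assumes the Hugging Face `transformers` library and its standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes and is only a starting point for the training examples above, not a complete fine-tuning recipe.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the pretrained-only checkpoint; it still needs task-specific fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-el16-dl4")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-small-el16-dl4")

# Encode a toy input to check that the checkpoint loads; use the scripts above
# for actual fine-tuning.
inputs = tokenizer("summarize: a toy input sentence", return_tensors="pt")
```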
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future. |
google/t5-efficient-small-el48 | b879ae2064dec59d082fbb731161e00344a3f67a | 2022-02-15T10:57:34.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-el48 | 7 | 0 | transformers | 14,103 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-EL48 (Deep-Narrow version)
T5-Efficient-SMALL-EL48 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-small-el48** - is of model type **Small** with the following variations:
- **el** is **48**
It has **192.72** million parameters and thus requires *ca.* **770.89 MB** of memory in full precision (*fp32*)
or **385.44 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal loading sketch follows the framework-specific examples below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
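A minimal loading sketch is shown below; it assumes the Hugging Face `transformers` library and its standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes and is only a starting point for the training examples above, not a complete fine-tuning recipe.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the pretrained-only checkpoint; it still needs task-specific fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-el48")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-small-el48")

# Encode a toy input to check that the checkpoint loads; use the scripts above
# for actual fine-tuning.
inputs = tokenizer("summarize: a toy input sentence", return_tensors="pt")
```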
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future. |
google/t5-efficient-small-ff9000 | 4c81c279bb1202431a0cadd3968a15eba5e9ae2d | 2022-02-15T10:50:27.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-ff9000 | 7 | null | transformers | 14,104 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-FF9000 (Deep-Narrow version)
T5-Efficient-SMALL-FF9000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-small-ff9000** - is of model type **Small** with the following variations:
- **ff** is **9000**
It has **148.6** million parameters and thus requires *ca.* **594.41 MB** of memory in full precision (*fp32*)
or **297.2 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal loading sketch follows the framework-specific examples below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
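A minimal loading sketch is shown below; it assumes the Hugging Face `transformers` library and its standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes and is only a starting point for the training examples above, not a complete fine-tuning recipe.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the pretrained-only checkpoint; it still needs task-specific fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-ff9000")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-small-ff9000")

# Encode a toy input to check that the checkpoint loads; use the scripts above
# for actual fine-tuning.
inputs = tokenizer("summarize: a toy input sentence", return_tensors="pt")
```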
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future. |
google/t5-efficient-small-kv32 | fdb3a94c9852195ac01dbf12771f476e30d85471 | 2022-02-15T10:50:37.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-kv32 | 7 | null | transformers | 14,105 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-KV32 (Deep-Narrow version)
T5-Efficient-SMALL-KV32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-small-kv32** - is of model type **Small** with the following variations:
- **kv** is **32**
It has **51.08** million parameters and thus requires *ca.* **204.34 MB** of memory in full precision (*fp32*)
or **102.17 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal loading sketch follows the framework-specific examples below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
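A minimal loading sketch is shown below; it assumes the Hugging Face `transformers` library and its standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes and is only a starting point for the training examples above, not a complete fine-tuning recipe.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the pretrained-only checkpoint; it still needs task-specific fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-kv32")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-small-kv32")

# Encode a toy input to check that the checkpoint loads; use the scripts above
# for actual fine-tuning.
inputs = tokenizer("summarize: a toy input sentence", return_tensors="pt")
```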
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future. |
google/t5-efficient-tiny-ff9000 | 7691325582efff3985f0ebdc3d5dfbf877e3addd | 2022-02-15T10:54:33.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-tiny-ff9000 | 7 | null | transformers | 14,106 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-TINY-FF9000 (Deep-Narrow version)
T5-Efficient-TINY-FF9000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-tiny-ff9000** - is of model type **Tiny** with the following variations:
- **ff** is **9000**
It has **49.13** million parameters and thus requires *ca.* **196.54 MB** of memory in full precision (*fp32*)
or **98.27 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal loading sketch follows the framework-specific examples below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
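A minimal loading sketch is shown below; it assumes the Hugging Face `transformers` library and its standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes and is only a starting point for the training examples above, not a complete fine-tuning recipe.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the pretrained-only checkpoint; it still needs task-specific fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-tiny-ff9000")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-tiny-ff9000")

# Encode a toy input to check that the checkpoint loads; use the scripts above
# for actual fine-tuning.
inputs = tokenizer("summarize: a toy input sentence", return_tensors="pt")
```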
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future. |
google/t5-efficient-tiny-nl8 | 8a4dca07e66c09cb83da418273356a414913a2a6 | 2022-02-15T10:51:50.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-tiny-nl8 | 7 | 5 | transformers | 14,107 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-TINY-NL8 (Deep-Narrow version)
T5-Efficient-TINY-NL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-tiny-nl8** - is of model type **Tiny** with the following variations:
- **nl** is **8**
It has **22.93** million parameters and thus requires *ca.* **91.74 MB** of memory in full precision (*fp32*)
or **45.87 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model (a minimal loading sketch follows the framework-specific examples below):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
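A minimal loading sketch is shown below; it assumes the Hugging Face `transformers` library and its standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes and is only a starting point for the training examples above, not a complete fine-tuning recipe.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the pretrained-only checkpoint; it still needs task-specific fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-tiny-nl8")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-tiny-nl8")

# Encode a toy input to check that the checkpoint loads; use the scripts above
# for actual fine-tuning.
inputs = tokenizer("summarize: a toy input sentence", return_tensors="pt")
```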
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues), as they might be ported in the future. |
google/t5-xxl-ssm-tqa | afcb673098d62bbbaf806d17b8f1560aa5131a30 | 2020-12-07T08:39:06.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:trivia_qa",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-xxl-ssm-tqa | 7 | null | transformers | 14,108 | ---
language: en
datasets:
- c4
- wikipedia
- trivia_qa
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), then further pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa).
**Note**: The model was fine-tuned on 100% of the train splits of [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa) for 10 steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Trivia QA - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-11b|https://huggingface.co/google/t5-large-ssm-tqa|60.5|
|**T5-xxl**|**https://huggingface.co/google/t5-xxl-ssm-tqa**|**61.6**|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned closed-book QA model and its tokenizer.
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-xxl-ssm-tqa")
t5_tok = AutoTokenizer.from_pretrained("google/t5-xxl-ssm-tqa")

# Encode the question, generate an answer from the model's parameters alone, and decode it.
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
google/tapas-mini-masklm | 8e943a3a3e8d25204f316f2aaacc0983b84a5aa8 | 2021-11-29T14:15:38.000Z | [
"pytorch",
"tf",
"tapas",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | google | null | google/tapas-mini-masklm | 7 | null | transformers | 14,109 | This model corresponds to **tapas_masklm_mini_reset** of the [original repository](https://github.com/google-research/tapas).
Here's how you can use it:
```python
from transformers import TapasTokenizer, TapasForMaskedLM
import pandas as pd
import torch
tokenizer = TapasTokenizer.from_pretrained("google/tapas-mini-masklm")
model = TapasForMaskedLM.from_pretrained("google/tapas-mini-masklm")
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
'Age': ["56", "45", "59"],
'Number of movies': ["87", "53", "69"]
}
table = pd.DataFrame.from_dict(data)
query = "How many movies has Leonardo [MASK] Caprio played in?"
# prepare inputs
inputs = tokenizer(table=table, queries=query, padding="max_length", return_tensors="pt")
# forward pass
outputs = model(**inputs)
# return top 5 values and predictions
masked_index = torch.nonzero(inputs.input_ids.squeeze() == tokenizer.mask_token_id, as_tuple=False)
logits = outputs.logits[0, masked_index.item(), :]
probs = logits.softmax(dim=0)
values, predictions = probs.topk(5)
for value, pred in zip(values, predictions):
print(f"{tokenizer.decode([pred])} with confidence {value}")
``` |
hamzab/codebert_code_search | 3d9078c581f17bcaecca823fa3b96ae4495493a6 | 2021-09-30T10:20:44.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | hamzab | null | hamzab/codebert_code_search | 7 | null | transformers | 14,110 | Entry not found |
haotieu/en-vi-mt-model | 97c5817edaf4ae2cdf161632c3897ced4304e273 | 2021-12-19T10:17:03.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | haotieu | null | haotieu/en-vi-mt-model | 7 | 1 | transformers | 14,111 | # Helsinki-NLP/opus-mt-en-vi
- This model is a fine-tuned checkpoint of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi).
- This model reaches a BLEU score of 33.086 on the test set of the IWSLT'15 English-Vietnamese data (see the usage sketch below).
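A minimal usage sketch (assuming the standard Transformers translation pipeline applies to this MarianMT fine-tune; the English sentence is illustrative only):
```python
from transformers import pipeline

# Load the fine-tuned English->Vietnamese checkpoint (MarianMT architecture)
translator = pipeline("translation", model="haotieu/en-vi-mt-model")

# Translate an example sentence
print(translator("The weather is nice today.")[0]["translation_text"])
```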
# Fine-tuning hyper-parameters
- learning_rate = 1e-4
- batch_size = 4
- num_train_epochs = 3.0 |
harithapliyal/distilbert-base-uncased-finetuned-cola | 8d5a07a64338385fe0a732a62ec820495aa6b34e | 2022-01-18T18:44:28.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | harithapliyal | null | harithapliyal/distilbert-base-uncased-finetuned-cola | 7 | null | transformers | 14,112 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5601668097291345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8366
- Matthews Correlation: 0.5602
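A minimal usage sketch (assuming the default text-classification pipeline; CoLA is a binary acceptability task, the example sentence is illustrative, and labels may appear as the generic `LABEL_0`/`LABEL_1` unless the config maps them to names):
```python
from transformers import pipeline

# Load the fine-tuned CoLA (linguistic acceptability) classifier
classifier = pipeline(
    "text-classification",
    model="harithapliyal/distilbert-base-uncased-finetuned-cola",
)

# Score an example sentence for acceptability
print(classifier("The book was written by the student."))
```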
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5257 | 1.0 | 535 | 0.5475 | 0.4039 |
| 0.3482 | 2.0 | 1070 | 0.5140 | 0.5004 |
| 0.2408 | 3.0 | 1605 | 0.6472 | 0.5264 |
| 0.1765 | 4.0 | 2140 | 0.7456 | 0.5403 |
| 0.1314 | 5.0 | 2675 | 0.8366 | 0.5602 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
hf-internal-testing/tiny-random-roformer | fdea99c900cd5dea04443df3a60af5ba4381a002 | 2021-09-17T19:21:29.000Z | [
"pytorch",
"tf",
"roformer",
"transformers"
]
| null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-roformer | 7 | null | transformers | 14,113 | Entry not found |
huggingtweets/discountpicasso-dril-liam_100000 | 3c8447c01f96b4742dda5a4971ee6eb5b5ec3be0 | 2021-09-07T00:14:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/discountpicasso-dril-liam_100000 | 7 | null | transformers | 14,114 | ---
language: en
thumbnail: https://www.huggingtweets.com/discountpicasso-dril-liam_100000/1630973640579/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1426930394297819137/-zzMnfJo_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/980964012170121217/U6FjPH4H_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LIAM & wint & Picasso</div>
<div style="text-align: center; font-size: 14px;">@discountpicasso-dril-liam_100000</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LIAM & wint & Picasso.
| Data | LIAM | wint | Picasso |
| --- | --- | --- | --- |
| Tweets downloaded | 1962 | 3226 | 3216 |
| Retweets | 135 | 472 | 427 |
| Short tweets | 435 | 313 | 421 |
| Tweets kept | 1392 | 2441 | 2368 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w4ekve8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @discountpicasso-dril-liam_100000's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/discountpicasso-dril-liam_100000')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/emmanuelmacron | f82bb3b3192cf9988417de5379dca9190fb76317 | 2022-06-12T14:23:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/emmanuelmacron | 7 | null | transformers | 14,115 | ---
language: en
thumbnail: http://www.huggingtweets.com/emmanuelmacron/1655043824718/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1502232882504421380/8brWTaNo_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Emmanuel Macron</div>
<div style="text-align: center; font-size: 14px;">@emmanuelmacron</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Emmanuel Macron.
| Data | Emmanuel Macron |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 230 |
| Short tweets | 67 |
| Tweets kept | 2953 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28yawqaq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @emmanuelmacron's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ut58xfqh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ut58xfqh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/emmanuelmacron')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/goatlich-yagisabi | 09acba15de71d7c625f74548f27e78c48c9e777d | 2021-06-23T19:16:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/goatlich-yagisabi | 7 | null | transformers | 14,116 | ---
language: en
thumbnail: https://www.huggingtweets.com/goatlich-yagisabi/1624475783796/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1406546124605898752/YRmbl1wc_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1389328774085365767/QFuxMWoj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gay Shawn 🏳️🌈 & 🔻L O W R Y 🔻</div>
<div style="text-align: center; font-size: 14px;">@goatlich-yagisabi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gay Shawn 🏳️🌈 & 🔻L O W R Y 🔻.
| Data | Gay Shawn 🏳️🌈 | 🔻L O W R Y 🔻 |
| --- | --- | --- |
| Tweets downloaded | 406 | 3156 |
| Retweets | 67 | 390 |
| Short tweets | 50 | 214 |
| Tweets kept | 289 | 2552 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wtnxwy1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goatlich-yagisabi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/qrbyfgtb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/qrbyfgtb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goatlich-yagisabi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/incharmuese-sadsocrates-vvangone | d3fb30e8b76bc90cad9cb2fe4b89346b270a368a | 2021-10-29T15:35:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/incharmuese-sadsocrates-vvangone | 7 | null | transformers | 14,117 | ---
language: en
thumbnail: https://www.huggingtweets.com/incharmuese-sadsocrates-vvangone/1635521727120/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/581592941124153346/5nfUJyU2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/561419401145376768/7OIwxUCC_400x400.jpeg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1190256978007904257/TsXH7_nP_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Charmeuse & Sad Socrates & Vincent Van Gone</div>
<div style="text-align: center; font-size: 14px;">@incharmuese-sadsocrates-vvangone</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Charmeuse & Sad Socrates & Vincent Van Gone.
| Data | Charmeuse | Sad Socrates | Vincent Van Gone |
| --- | --- | --- | --- |
| Tweets downloaded | 3238 | 3197 | 3233 |
| Retweets | 1165 | 40 | 1054 |
| Short tweets | 248 | 161 | 266 |
| Tweets kept | 1825 | 2996 | 1913 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13ochftk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @incharmuese-sadsocrates-vvangone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/173sb7ob) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/173sb7ob/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/incharmuese-sadsocrates-vvangone')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jamescharles-loganpaul-tanamongeau | 159be6f4ac4e4c6dbbb767089f4505959216ab47 | 2021-09-14T05:53:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/jamescharles-loganpaul-tanamongeau | 7 | null | transformers | 14,118 | ---
language: en
thumbnail: https://www.huggingtweets.com/jamescharles-loganpaul-tanamongeau/1631598787303/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1420806762408464385/10y3M0iO_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1324782032124215296/HMG6-q8g_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1401837042934468611/okzqIoMb_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CANCELLED & James Charles & Logan Paul</div>
<div style="text-align: center; font-size: 14px;">@jamescharles-loganpaul-tanamongeau</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CANCELLED & James Charles & Logan Paul.
| Data | CANCELLED | James Charles | Logan Paul |
| --- | --- | --- | --- |
| Tweets downloaded | 3167 | 3182 | 3246 |
| Retweets | 938 | 480 | 98 |
| Short tweets | 522 | 496 | 287 |
| Tweets kept | 1707 | 2206 | 2861 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2avr905u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamescharles-loganpaul-tanamongeau's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2at101p1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2at101p1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jamescharles-loganpaul-tanamongeau')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/magicjohnson | 5633baaf228d4a84b383c377ed4f09894de93160 | 2021-06-04T20:32:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/magicjohnson | 7 | null | transformers | 14,119 | ---
language: en
thumbnail: https://www.huggingtweets.com/magicjohnson/1622838726917/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1090357359782768640/ITPFaU3F_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Earvin Magic Johnson</div>
<div style="text-align: center; font-size: 14px;">@magicjohnson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Earvin Magic Johnson.
| Data | Earvin Magic Johnson |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 103 |
| Short tweets | 94 |
| Tweets kept | 3053 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7g3n70f6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @magicjohnson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/gdznqoo2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/gdznqoo2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/magicjohnson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/marknorm | 31482ba1aff27755809f46cb1166b85026ea1e5f | 2021-05-31T19:21:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/marknorm | 7 | null | transformers | 14,120 | ---
language: en
thumbnail: https://www.huggingtweets.com/marknorm/1622488902602/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/903769803768217600/EKtan_aM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mark normand</div>
<div style="text-align: center; font-size: 14px;">@marknorm</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from mark normand.
| Data | mark normand |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 136 |
| Short tweets | 522 |
| Tweets kept | 2591 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25e2ma2z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marknorm's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/17zjqoal) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/17zjqoal/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/marknorm')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/robber0540 | baab29807b1631aae136ce512546cfa32616ffbb | 2021-06-13T21:47:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/robber0540 | 7 | null | transformers | 14,121 | ---
language: en
thumbnail: https://www.huggingtweets.com/robber0540/1623620847015/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/822229503212666880/L4UutyTM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Combat Ballerina</div>
<div style="text-align: center; font-size: 14px;">@robber0540</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Combat Ballerina.
| Data | Combat Ballerina |
| --- | --- |
| Tweets downloaded | 668 |
| Retweets | 65 |
| Short tweets | 303 |
| Tweets kept | 300 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2se37abl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @robber0540's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3i4yohh5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3i4yohh5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/robber0540')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hyunwoongko/zhberta-base-zh-xnli | db31bc3c8767c074f1dcb062cc98e8c9020f149d | 2021-05-20T16:45:43.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | hyunwoongko | null | hyunwoongko/zhberta-base-zh-xnli | 7 | null | transformers | 14,122 | Entry not found |
ielab/unicoil-tilde200-msmarco-passage | dcfb821ee06326af2fbec9930a8da1d696154553 | 2021-10-31T13:50:01.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | ielab | null | ielab/unicoil-tilde200-msmarco-passage | 7 | null | transformers | 14,123 | uniCOIL trained with passages expand with TILDE (m=200) |
inspectorsolaris/gpt2_french_pre_trained | e3dfc872605c180df2a4a396bf27d1ef4a0b4cef | 2021-05-27T04:31:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | inspectorsolaris | null | inspectorsolaris/gpt2_french_pre_trained | 7 | null | transformers | 14,124 | Entry not found |
it5/it5-small-formal-to-informal | 9b795e56c1c7ff3dc044b279148586465f49b405 | 2022-03-09T07:45:14.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"it",
"dataset:yahoo/xformal_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"style-transfer",
"formality-style-transfer",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/it5-small-formal-to-informal | 7 | null | transformers | 14,125 | ---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "Questa performance è a dir poco spiacevole."
- text: "In attesa di un Suo cortese riscontro, Le auguriamo un piacevole proseguimento di giornata."
- text: "Questa visione mi procura una goduria indescrivibile."
- text: "qualora ciò possa interessarti, ti pregherei di contattarmi."
metrics:
- rouge
- bertscore
model-index:
- name: it5-small-formal-to-informal
results:
- task:
type: formality-style-transfer
name: "Formal-to-informal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.650
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.450
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.631
name: "Avg. Test RougeL"
- type: bertscore
value: 0.663
name: "Avg. Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "8g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
---
# IT5 Small for Formal-to-informal Style Transfer 🤗
This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
f2i = pipeline("text2text-generation", model='it5/it5-small-formal-to-informal')
f2i("Vi ringrazio infinitamente per vostra disponibilità")
>>> [{"generated_text": "e grazie per la vostra disponibilità!"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-formal-to-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-formal-to-informal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
jambo/marker-associations-snp-binary-base | 16ef812cc58058c2438efdd610801ea4cb2c9573 | 2021-11-02T13:00:57.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:marker-associations-snp-binary-base",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | jambo | null | jambo/marker-associations-snp-binary-base | 7 | null | transformers | 14,126 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- marker-associations-snp-binary-base
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: marker-associations-snp-binary-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: marker-associations-snp-binary-base
type: marker-associations-snp-binary-base
metrics:
- name: Precision
type: precision
value: 0.9384057971014492
- name: Recall
type: recall
value: 0.9055944055944056
- name: F1
type: f1
value: 0.9217081850533808
- name: Accuracy
type: accuracy
value: 0.9107505070993914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marker-associations-snp-binary-base
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the marker-associations-snp-binary-base dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4027
- Precision: 0.9384
- Recall: 0.9056
- F1: 0.9217
- Accuracy: 0.9108
- Auc: 0.9578
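A minimal usage sketch (assuming the default text-classification pipeline; the input sentence and the SNP identifier in it are hypothetical, and label names depend on the model config):
```python
from transformers import pipeline

# Load the fine-tuned PubMedBERT-based binary classifier
classifier = pipeline(
    "text-classification",
    model="jambo/marker-associations-snp-binary-base",
)

# Hypothetical sentence describing a SNP-disease association
print(classifier("The rs12345 variant was significantly associated with disease risk in the cohort."))
```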
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Auc |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:------:|
| No log | 1.0 | 153 | 0.2776 | 0.9 | 0.9441 | 0.9215 | 0.9067 | 0.9613 |
| No log | 2.0 | 306 | 0.4380 | 0.9126 | 0.9126 | 0.9126 | 0.8986 | 0.9510 |
| No log | 3.0 | 459 | 0.4027 | 0.9384 | 0.9056 | 0.9217 | 0.9108 | 0.9578 |
| 0.2215 | 4.0 | 612 | 0.3547 | 0.9449 | 0.8986 | 0.9211 | 0.9108 | 0.9642 |
| 0.2215 | 5.0 | 765 | 0.4465 | 0.9107 | 0.9266 | 0.9185 | 0.9047 | 0.9636 |
| 0.2215 | 6.0 | 918 | 0.5770 | 0.8970 | 0.9441 | 0.9199 | 0.9047 | 0.9666 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|
jamescalam/bert-stsb-cross-encoder | 6d997398b988f0820a185b8d02eb6cdec177ef0b | 2021-12-17T08:54:27.000Z | [
"pytorch",
"bert",
"text-classification",
"sentence-transformers",
"sentence-similarity",
"transformers",
"cross-encoder"
]
| sentence-similarity | false | jamescalam | null | jamescalam/bert-stsb-cross-encoder | 7 | null | sentence-transformers | 14,127 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
- cross-encoder
---
# Augmented SBERT STSb
This is a [sentence-transformers](https://www.SBERT.net) cross encoder model.
It is used as a demo model within the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp), for the chapter on [In-domain Data Augmentation with BERT](https://www.pinecone.io/learn/data-augmentation/).
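A minimal usage sketch (assuming the standard `sentence-transformers` CrossEncoder API; the sentence pair is illustrative, and the model returns an STSb-style similarity score):
```python
from sentence_transformers import CrossEncoder

# Load the cross encoder trained on (augmented) STSb data
model = CrossEncoder("jamescalam/bert-stsb-cross-encoder")

# Score a sentence pair for semantic similarity
score = model.predict([("A man is playing a guitar.", "Someone is playing an instrument.")])
print(score)
```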
|
jamiewjm/CCGwGPT2 | a14767871110389b3a51d2d0ff0c4fd7cc8b8c27 | 2021-11-13T03:46:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"Chinese",
"transformers"
]
| text-generation | false | jamiewjm | null | jamiewjm/CCGwGPT2 | 7 | null | transformers | 14,128 | ---
language: Chinese
widget:
- text: "五言藏头:春天到来|桃花|"
---
# Input format
> format|title|body
The format is one of the following (the format string itself is given in Chinese, as in the widget example above):
* 五绝 (five-character quatrain)
* 五律 (five-character regulated verse)
* 七绝 (seven-character quatrain)
* 七律 (seven-character regulated verse)
* 五言排律 (five-character extended regulated verse)
* 七言排律 (seven-character extended regulated verse)
* 五言藏头:acrostic characters... (five-character acrostic; the characters to embed follow the colon)
* 七言藏头:acrostic characters... (seven-character acrostic; the characters to embed follow the colon)
* 对联 (couplet)
----- removed -----
* 诗经 (Classic of Poetry)
* 乐府 (yuefu ballad)
* 楚辞 (Songs of Chu)
* 词牌名 (ci tune name, e.g. 水调歌头, 菩萨蛮, ...)
* 古诗 (classical poem)
If the format is left empty, it defaults to a five-character quatrain (五绝).
The title is the theme of the poem and may be left empty.
The body may specify the starting characters and may also be left empty. |
jcblaise/electra-tagalog-small-uncased-generator | 67b78e895fd8960a45a484d69583bccb6d5be564 | 2021-11-11T06:17:14.000Z | [
"pytorch",
"electra",
"fill-mask",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0",
"autotrain_compatible"
]
| fill-mask | false | jcblaise | null | jcblaise/electra-tagalog-small-uncased-generator | 7 | null | transformers | 14,129 | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Small Uncased Generator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
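A minimal mask-filling sketch (assuming the default fill-mask pipeline works with this ELECTRA generator; the Tagalog example sentence is illustrative only):
```python
from transformers import pipeline

# Load the uncased Tagalog ELECTRA generator for mask filling
fill_mask = pipeline(
    "fill-mask",
    model="jcblaise/electra-tagalog-small-uncased-generator",
)

# Predict the masked token in a simple Tagalog sentence
print(fill_mask("maganda ang [MASK] ngayon."))
```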
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jhgan/ko-sroberta-sts | 505abda37a6e7c6816285f83d13e1cdc3286d586 | 2021-12-27T13:09:43.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | jhgan | null | jhgan/ko-sroberta-sts | 7 | null | sentence-transformers | 14,130 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ko-sroberta-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sroberta-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sroberta-sts')
model = AutoModel.from_pretrained('jhgan/ko-sroberta-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Results obtained after training on the KorSTS training set and evaluating on the KorSTS evaluation set.
- Cosine Pearson: 81.84
- Cosine Spearman: 81.82
- Euclidean Pearson: 81.15
- Euclidean Spearman: 81.25
- Manhattan Pearson: 81.14
- Manhattan Spearman: 81.25
- Dot Pearson: 79.09
- Dot Spearman: 78.54
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 719 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ji-xin/roberta_base-SST2-two_stage | e5e642a290d0063767cf88134375773c86b5e7e8 | 2020-07-08T15:09:27.000Z | [
"pytorch",
"transformers"
]
| null | false | ji-xin | null | ji-xin/roberta_base-SST2-two_stage | 7 | null | transformers | 14,131 | Entry not found |
joelito/distilbert-based-german-cased-ler | d0c724b564eb53fd4745ee643792e4a67747a662 | 2020-11-30T12:52:05.000Z | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | joelito | null | joelito/distilbert-based-german-cased-ler | 7 | null | transformers | 14,132 | # distilbert-base-german-cased-ler
Task: ler
Base Model: distilbert-base-german-cased
Trained for 3 epochs
Batch-size: 12
Seed: 42
Test F1-Score: 0.936 |
juliamendelsohn/framing_immigration_specific | 2dc5de08001a3bcc74bbc6b5b17452af977ce9f1 | 2021-05-20T17:27:05.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | juliamendelsohn | null | juliamendelsohn/framing_immigration_specific | 7 | null | transformers | 14,133 | Entry not found |
juliusco/biobert-base-cased-v1.1-squad-finetuned-covbiobert | 0ea62665d5989ccbf0164e19430bf27c6a252d7e | 2021-12-20T07:58:26.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | juliusco | null | juliusco/biobert-base-cased-v1.1-squad-finetuned-covbiobert | 7 | null | transformers | 14,134 | ---
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: biobert-base-cased-v1.1-squad-finetuned-covbiobert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.1-squad-finetuned-covbiobert
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1-squad](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1-squad) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3959
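A minimal usage sketch (assuming the standard extractive question-answering pipeline; the question and context below are illustrative only):
```python
from transformers import pipeline

# Load the BioBERT model fine-tuned on COVID-19 QA
qa = pipeline(
    "question-answering",
    model="juliusco/biobert-base-cased-v1.1-squad-finetuned-covbiobert",
)

# Ask a question against a short context passage
result = qa(
    question="What virus causes COVID-19?",
    context="COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).",
)
print(result["answer"], result["score"])
```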
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 486 | 0.3787 |
| 0.161 | 2.0 | 972 | 0.3959 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
junnyu/roformer_base_wwm_cluecorpussmall | 82af7eac5be744ed4fd8db3db6a1d5763230e3b0 | 2022-02-10T12:26:39.000Z | [
"pytorch",
"roformer",
"fill-mask",
"zh",
"arxiv:2104.09864",
"transformers",
"tf2.0",
"paddlepaddle",
"autotrain_compatible"
]
| fill-mask | false | junnyu | null | junnyu/roformer_base_wwm_cluecorpussmall | 7 | 1 | transformers | 14,135 | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
- paddlepaddle
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
Pretrained model on a 13 GB Chinese corpus (CLUECorpusSmall). Masked language modeling (MLM) and sentence order prediction (SOP) were used as the training tasks.
Pretraining was carried out on the 13 GB CLUECorpusSmall dataset using the `Whole Mask LM` (whole word masking) and `SOP` objectives.
The training logic follows the ERNIE-1.0 example here: https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/ernie-1.0
## Training details
- paddlepaddle+paddlenlp
- V100 x 4
- batch size 256
- max_seq_len 512
- max_lr 0.0001
- min_lr 0.00001
- weight_decay 0.01
- grad_clip 1.0
- Total number of sentences seen during training: ```128*30w + 256*15w + 256*14.5w + 256*46.5w + 256*17w = 27648w``` (here "w" = 10,000, i.e. roughly 276.5M sentences)
- Roughly equivalent to 54% of a run with batch size 512 for 1M steps
Final loss:
```python
[2022-02-05 16:05:59,067] [ INFO] - global step 170100, loss: 2.651634932, lm_loss: 2.603405, sop_loss: 0.048229, speed: 1.06 steps/s, ips: 271.68 seqs/s, learning rate: 6.66465e-05, loss_scaling: 137438.96875, num_good_steps: 356, num_bad_steps: 0
[2022-02-05 16:07:28,227] [ INFO] - global step 170200, loss: 2.822231531, lm_loss: 2.662831, sop_loss: 0.159401, speed: 1.12 steps/s, ips: 287.13 seqs/s, learning rate: 6.66263e-05, loss_scaling: 137438.96875, num_good_steps: 59, num_bad_steps: 0
[2022-02-05 16:08:57,346] [ INFO] - global step 170300, loss: 2.710968971, lm_loss: 2.673646, sop_loss: 0.037323, speed: 1.12 steps/s, ips: 287.26 seqs/s, learning rate: 6.66061e-05, loss_scaling: 137438.96875, num_good_steps: 159, num_bad_steps: 0
[2022-02-05 16:10:26,698] [ INFO] - global step 170400, loss: 2.867662907, lm_loss: 2.619032, sop_loss: 0.248631, speed: 1.12 steps/s, ips: 286.51 seqs/s, learning rate: 6.65859e-05, loss_scaling: 137438.96875, num_good_steps: 259, num_bad_steps: 0
[2022-02-05 16:11:55,714] [ INFO] - global step 170500, loss: 3.158756495, lm_loss: 2.953678, sop_loss: 0.205079, speed: 1.12 steps/s, ips: 287.59 seqs/s, learning rate: 6.65657e-05, loss_scaling: 137438.96875, num_good_steps: 359, num_bad_steps: 0
[2022-02-05 16:13:24,869] [ INFO] - global step 170600, loss: 2.860815048, lm_loss: 2.754750, sop_loss: 0.106064, speed: 1.12 steps/s, ips: 287.14 seqs/s, learning rate: 6.65455e-05, loss_scaling: 137438.96875, num_good_steps: 33, num_bad_steps: 0
```
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch + TF 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## Usage with PyTorch
```python
import torch
from transformers import RoFormerForMaskedLM, BertTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = BertTokenizer.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天||人||气||阳||雨]很好,我[想||就||要||也||还]去公园玩。
```
## Citation
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
kamivao/autonlp-entity_selection-5771228 | 3d27f324393f5a8260411a61b2a2b37b4a764ed7 | 2021-07-24T18:59:19.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:kamivao/autonlp-data-entity_selection",
"transformers",
"autonlp"
]
| text-classification | false | kamivao | null | kamivao/autonlp-entity_selection-5771228 | 7 | null | transformers | 14,136 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- kamivao/autonlp-data-entity_selection
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 5771228
## Validation Metrics
- Loss: 0.17127291858196259
- Accuracy: 0.9206671174216813
- Precision: 0.9588885738588036
- Recall: 0.9423237670660352
- AUC: 0.9720189638675828
- F1: 0.9505340078695896
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kamivao/autonlp-entity_selection-5771228
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("kamivao/autonlp-entity_selection-5771228", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kamivao/autonlp-entity_selection-5771228", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
katrin-kc/bert-finetuned-imdb | 802918aa5a6939e8f52cce5a8a127b39ce17b354 | 2022-01-28T07:51:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | katrin-kc | null | katrin-kc/bert-finetuned-imdb | 7 | null | transformers | 14,137 | Entry not found |
keyonvafa/compatible-gpt2 | 90be6450674313fd923ded8618d51f08175f911b | 2021-09-10T22:16:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | keyonvafa | null | keyonvafa/compatible-gpt2 | 7 | 1 | transformers | 14,138 | Entry not found |
khizon/distilbert-unreliable-news-eng-4L | 8e344f5172b7b26e9bb8a144c25942d891114a48 | 2022-01-15T07:06:59.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | khizon | null | khizon/distilbert-unreliable-news-eng-4L | 7 | null | transformers | 14,139 | # Unreliable News Classifier (English)
Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there is no overlap of news sources between the three sets.
This model uses the pre-trained weights of `distilbert-base-cased` as a starting point (only 4 layers) and achieves 84% accuracy on the test set. It is within 1% of the performance of the BERT-based model while running at **2.0x** the speed.
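Below is a minimal inference sketch using the `transformers` pipeline; the example input is illustrative, and the returned label names come from the checkpoint's own config rather than this card.

```python
from transformers import pipeline

# Illustrative sketch: label names are taken from the checkpoint's config.
classifier = pipeline(
    "text-classification",
    model="khizon/distilbert-unreliable-news-eng-4L",
)
print(classifier("Scientists discover a new species of deep-sea fish."))
```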
For more details: [Github](https://github.com/khizon/CS284_final_project) |
kinit/slovakbert-sentiment-twitter | 1b3794da24b923e824caf91c19f584d0695d8688 | 2021-11-30T10:51:00.000Z | [
"pytorch",
"roberta",
"text-classification",
"sk",
"arxiv:2109.15254",
"transformers",
"twitter",
"sentiment-analysis",
"license:cc"
]
| text-classification | false | kinit | null | kinit/slovakbert-sentiment-twitter | 7 | 1 | transformers | 14,140 | ---
language:
- sk
tags:
- twitter
- sentiment-analysis
license: cc
metrics:
- f1
widget:
- text: "Najkrajšia vianočná reklama: Toto milé video vám vykúzli čarovnú atmosféru: Vianoce sa nezadržateľne blížia."
- text: "A opäť sa objavili nebezpečné výrobky. Pozrite sa, či ich nemáte doma"
---
# Sentiment Analysis model based on SlovakBERT
This is a sentiment analysis classifier based on [SlovakBERT](https://huggingface.co/gerulata/slovakbert). The model can distinguish three levels of sentiment:
- `-1` - Negative sentiment
- `0` - Neutral sentiment
- `1` - Positive sentiment
The model was fine-tuned using the Slovak part of the [Multilingual Twitter Sentiment Analysis Dataset](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0155036) [Mozetič et al 2016], which contains 50k manually annotated Slovak tweets. As such, it is tuned for tweets, and using it for general-purpose sentiment analysis is not advised.
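A minimal usage sketch with the `transformers` pipeline is shown below; how the checkpoint's raw output labels map onto the -1/0/1 scale above depends on its config, so treat that mapping as an assumption.

```python
from transformers import pipeline

# Sketch only: consult the checkpoint's id2label config to map outputs
# onto the -1 / 0 / 1 sentiment scale described above.
classifier = pipeline("sentiment-analysis", model="kinit/slovakbert-sentiment-twitter")
print(classifier("Najkrajšia vianočná reklama: Toto milé video vám vykúzli čarovnú atmosféru."))
```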
## Results
The model was evaluated in [our paper](https://arxiv.org/abs/2109.15254) [Pikuliak et al 2021, Section 4.4]. It achieves a \\(0.67\\) F1-score on the original dataset and a \\(0.58\\) F1-score on a general reviews dataset.
## Cite
```
@article{DBLP:journals/corr/abs-2109-15254,
author = {Mat{\'{u}}{\v{s}} Pikuliak and
{\v{S}}tefan Grivalsk{\'{y}} and
Martin Kon{\^{o}}pka and
Miroslav Bl{\v{s}}t{\'{a}}k and
Martin Tamajka and
Viktor Bachrat{\'{y}} and
Mari{\'{a}}n {\v{S}}imko and
Pavol Bal{\'{a}}{\v{z}}ik and
Michal Trnka and
Filip Uhl{\'{a}}rik},
title = {SlovakBERT: Slovak Masked Language Model},
journal = {CoRR},
volume = {abs/2109.15254},
year = {2021},
url = {https://arxiv.org/abs/2109.15254},
eprinttype = {arXiv},
eprint = {2109.15254},
}
``` |
kmfoda/description_generator_new | 7595a43d7d5c1012571cf090acc5f2c8b623d447 | 2021-10-06T14:42:06.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | kmfoda | null | kmfoda/description_generator_new | 7 | null | transformers | 14,141 | Entry not found |
krevas/finance-koelectra-small-discriminator | df81b5efc4cad7e947f4bf5fbca3391cd5fac223 | 2020-12-11T21:48:34.000Z | [
"pytorch",
"electra",
"pretraining",
"ko",
"transformers"
]
| null | false | krevas | null | krevas/finance-koelectra-small-discriminator | 7 | null | transformers | 14,142 | ---
language: ko
---
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-discriminator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver News.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch
discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-small-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-small-discriminator")
sentence = "내일 해당 종목이 대폭 상승할 것이다"
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]
print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)])
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
leonardvorbeck/wav2vec2-large-robust-LS960 | 35876357e7a0140432e08779d1beab8534166343 | 2021-08-26T12:22:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:libri_light",
"dataset:common_voice",
"dataset:switchboard",
"dataset:fisher",
"arxiv:2104.01027",
"transformers",
"speech",
"CTC",
"Attention",
"license:apache-2.0"
]
| automatic-speech-recognition | false | leonardvorbeck | null | leonardvorbeck/wav2vec2-large-robust-LS960 | 7 | 1 | transformers | 14,143 | ---
language: en
datasets:
- libri_light
- common_voice
- switchboard
- fisher
tags:
- speech
- automatic-speech-recognition
- CTC
- Attention
- wav2vec2
license: apache-2.0
---
# Wav2Vec2-Large-Robust - Finetuned on Librispeech (960 hours)
## Note: the model has not been initialized. If you want to use it without further fine-tuning, first do a forward pass to recalculate the normalized weights of the positional convolutional layer:
```ipython
with torch.no_grad():
model(torch.randn((1,300_000)))
```
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained on 16kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-source collected audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data
When using the model, make sure that your speech input is also sampled at 16kHz.
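A minimal transcription sketch is given below; it assumes this checkpoint ships a CTC vocabulary and processor, repeats the warm-up forward pass from the note above, and uses a hypothetical audio file name.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Sketch only: assumes the repository provides a matching processor/vocabulary.
processor = Wav2Vec2Processor.from_pretrained("leonardvorbeck/wav2vec2-large-robust-LS960")
model = Wav2Vec2ForCTC.from_pretrained("leonardvorbeck/wav2vec2-large-robust-LS960")

# Warm-up forward pass (see the note above).
with torch.no_grad():
    model(torch.randn((1, 300_000)))

speech, sampling_rate = torchaudio.load("example.wav")  # hypothetical file
if sampling_rate != 16_000:
    speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```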
Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
[Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
**Abstract**
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
lewtun/bert-finetuned-ner | 71a0699e8ada896c55abde8c399d9e141588be13 | 2021-11-14T15:34:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | lewtun | null | lewtun/bert-finetuned-ner | 7 | null | transformers | 14,144 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9407949442873773
- name: Recall
type: recall
value: 0.9520363513968361
- name: F1
type: f1
value: 0.9463822668339608
- name: Accuracy
type: accuracy
value: 0.9865485371166186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0603
- Precision: 0.9408
- Recall: 0.9520
- F1: 0.9464
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0884 | 1.0 | 1756 | 0.0658 | 0.9145 | 0.9337 | 0.9240 | 0.9827 |
| 0.0375 | 2.0 | 3512 | 0.0618 | 0.9366 | 0.9490 | 0.9427 | 0.9864 |
| 0.0216 | 3.0 | 5268 | 0.0603 | 0.9408 | 0.9520 | 0.9464 | 0.9865 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
lgris/wav2vec2-large-xlsr-coraa-portuguese-cv8 | 30a83cc824f94acab1dca2fa9cb7e793118206c9 | 2022-02-10T23:23:59.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | lgris | null | lgris/wav2vec2-large-xlsr-coraa-portuguese-cv8 | 7 | null | transformers | 14,145 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xlsr-coraa-portuguese-cv8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-coraa-portuguese-cv8
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- Wer: 0.1365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5614 | 0.1 | 100 | 0.2542 | 0.1986 |
| 0.5181 | 0.19 | 200 | 0.2740 | 0.2146 |
| 0.5056 | 0.29 | 300 | 0.2472 | 0.2068 |
| 0.4747 | 0.39 | 400 | 0.2464 | 0.2166 |
| 0.4627 | 0.48 | 500 | 0.2277 | 0.2041 |
| 0.4403 | 0.58 | 600 | 0.2245 | 0.1977 |
| 0.4413 | 0.68 | 700 | 0.2156 | 0.1968 |
| 0.437 | 0.77 | 800 | 0.2102 | 0.1919 |
| 0.4305 | 0.87 | 900 | 0.2130 | 0.1864 |
| 0.4324 | 0.97 | 1000 | 0.2144 | 0.1902 |
| 0.4217 | 1.06 | 1100 | 0.2230 | 0.1891 |
| 0.3823 | 1.16 | 1200 | 0.2033 | 0.1774 |
| 0.3641 | 1.25 | 1300 | 0.2143 | 0.1830 |
| 0.3707 | 1.35 | 1400 | 0.2034 | 0.1793 |
| 0.3767 | 1.45 | 1500 | 0.2029 | 0.1823 |
| 0.3483 | 1.54 | 1600 | 0.1999 | 0.1740 |
| 0.3577 | 1.64 | 1700 | 0.1928 | 0.1728 |
| 0.3667 | 1.74 | 1800 | 0.1898 | 0.1726 |
| 0.3283 | 1.83 | 1900 | 0.1920 | 0.1688 |
| 0.3571 | 1.93 | 2000 | 0.1904 | 0.1649 |
| 0.3467 | 2.03 | 2100 | 0.1994 | 0.1648 |
| 0.3145 | 2.12 | 2200 | 0.1940 | 0.1682 |
| 0.3186 | 2.22 | 2300 | 0.1879 | 0.1571 |
| 0.3058 | 2.32 | 2400 | 0.1975 | 0.1678 |
| 0.3096 | 2.41 | 2500 | 0.1877 | 0.1589 |
| 0.2964 | 2.51 | 2600 | 0.1862 | 0.1568 |
| 0.3068 | 2.61 | 2700 | 0.1809 | 0.1588 |
| 0.3036 | 2.7 | 2800 | 0.1769 | 0.1573 |
| 0.3084 | 2.8 | 2900 | 0.1836 | 0.1524 |
| 0.3109 | 2.9 | 3000 | 0.1807 | 0.1519 |
| 0.2969 | 2.99 | 3100 | 0.1851 | 0.1516 |
| 0.2698 | 3.09 | 3200 | 0.1737 | 0.1490 |
| 0.2703 | 3.19 | 3300 | 0.1759 | 0.1457 |
| 0.2759 | 3.28 | 3400 | 0.1778 | 0.1471 |
| 0.2728 | 3.38 | 3500 | 0.1717 | 0.1462 |
| 0.2398 | 3.47 | 3600 | 0.1767 | 0.1451 |
| 0.256 | 3.57 | 3700 | 0.1742 | 0.1410 |
| 0.2712 | 3.67 | 3800 | 0.1674 | 0.1414 |
| 0.2648 | 3.76 | 3900 | 0.1717 | 0.1423 |
| 0.2576 | 3.86 | 4000 | 0.1672 | 0.1403 |
| 0.2504 | 3.96 | 4100 | 0.1683 | 0.1381 |
| 0.2406 | 4.05 | 4200 | 0.1685 | 0.1399 |
| 0.2403 | 4.15 | 4300 | 0.1656 | 0.1381 |
| 0.2233 | 4.25 | 4400 | 0.1687 | 0.1371 |
| 0.2546 | 4.34 | 4500 | 0.1642 | 0.1377 |
| 0.2431 | 4.44 | 4600 | 0.1655 | 0.1372 |
| 0.2337 | 4.54 | 4700 | 0.1625 | 0.1370 |
| 0.2607 | 4.63 | 4800 | 0.1618 | 0.1363 |
| 0.2292 | 4.73 | 4900 | 0.1622 | 0.1366 |
| 0.2232 | 4.83 | 5000 | 0.1626 | 0.1365 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
lighteternal/SSE-TUC-mt-el-en-lowercase | 043dffcf265eb5362f13231e9ea9757def46bda8 | 2021-03-31T17:26:44.000Z | [
"pytorch",
"fsmt",
"text2text-generation",
"en",
"el",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | lighteternal | null | lighteternal/SSE-TUC-mt-el-en-lowercase | 7 | null | transformers | 14,146 | ---
language:
- en
- el
tags:
- translation
widget:
- text: "Η τύχη βοηθάει τους τολμηρούς."
license: apache-2.0
metrics:
- bleu
---
## Greek to English NMT (lower-case output)
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* source languages: el
* target languages: en
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + BPE segmentation
* metrics: bleu, chrf
* output: lowercase only; for a mixed-case model use this: https://huggingface.co/lighteternal/SSE-TUC-mt-el-en-cased
### Model description
Trained using the Fairseq framework with the transformer_iwslt_de_en architecture.
BPE segmentation (10k codes).
Lower-case model.
### How to use
```
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
mname = " <your_downloaded_model_folderpath_here> "
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
text = "Η τύχη βοηθάει τους τολμηρούς."
encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs, start=1):
print(f"{i}: {output.tolist()}")
decoded = tokenizer.decode(output, skip_special_tokens=True)
print(f"{i}: {decoded}")
```
## Training data
Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)
## Eval results
Results on Tatoeba testset (EL-EN):
| BLEU | chrF |
| ------ | ------ |
| 79.3 | 0.795 |
Results on XNLI parallel (EL-EN):
| BLEU | chrF |
| ------ | ------ |
| 66.2 | 0.623 |
### BibTeX entry and citation info
Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
|
linkpipi/distilbert-base-uncased-finetuned-sst2 | 4457d4326ffab739a905aa5be8aceaec8655b68d | 2021-11-18T06:33:38.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | linkpipi | null | linkpipi/distilbert-base-uncased-finetuned-sst2 | 7 | null | transformers | 14,147 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.908256880733945
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3263
- Accuracy: 0.9083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.191 | 1.0 | 4210 | 0.3263 | 0.9083 |
| 0.1284 | 2.0 | 8420 | 0.3965 | 0.9025 |
| 0.0833 | 3.0 | 12630 | 0.4239 | 0.9048 |
| 0.0707 | 4.0 | 16840 | 0.4164 | 0.9037 |
| 0.0512 | 5.0 | 21050 | 0.5315 | 0.9037 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
liuchenyang33/bert_cn_finetuning | 2b76da941a8e61050121d2fe9ed312080f442f3e | 2021-05-19T22:02:41.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | liuchenyang33 | null | liuchenyang33/bert_cn_finetuning | 7 | null | transformers | 14,148 | Entry not found |
liyijing024/covid-misinfo | ee3d74bce35cbcd9dc79a96c3881c7c4a7efef48 | 2021-11-20T19:56:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | liyijing024 | null | liyijing024/covid-misinfo | 7 | null | transformers | 14,149 | Entry not found |
luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_10k | ade345bf7b843d2d33cea16cc7d5463fd0bf4647 | 2021-10-27T12:42:56.000Z | [
"pytorch",
"roberta",
"transformers"
]
| null | false | luffycodes | null | luffycodes/bb_narataka_roberta_large_nli_bsz_16_bb_bsz_16_nli_lr_1e5_bb_lr_1e5_wu_10k | 7 | null | transformers | 14,150 | Entry not found |
luke-thorburn/suggest-objections-soft | 2b78e635a5ff4f4b76299cc8e194d53c453f2d2f | 2022-07-12T09:43:28.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"argumentation",
"license:apache-2.0"
]
| text-generation | false | luke-thorburn | null | luke-thorburn/suggest-objections-soft | 7 | null | transformers | 14,151 | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate objections to a claim
This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating the objections to a claim, optionally given some example objections to that claim. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
[prepended soft prompt][original claim]
Cons:
- [objection 1]
- [objection 2]
...
- [objection n]
- [generated objection]
```
# Dataset
The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model uses the same frozen parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B) with an added soft prompt, so it likely shares many of that model's limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alberto | 2f69d1e9187480e366d9d495e22221f1563b03ad | 2020-04-24T16:02:31.000Z | [
"pytorch",
"tf",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | m-polignano-uniba | null | m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alberto | 7 | null | transformers | 14,152 | Entry not found |
mackseem/distilbert-base-uncased-finetuned-ner | 458d75df5e47971f3d96e586724f93ec4ab5028c | 2022-07-13T21:52:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | mackseem | null | mackseem/distilbert-base-uncased-finetuned-ner | 7 | null | transformers | 14,153 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9244616234124793
- name: Recall
type: recall
value: 0.9364582168027744
- name: F1
type: f1
value: 0.9304212515282871
- name: Accuracy
type: accuracy
value: 0.9833987322668276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9245
- Recall: 0.9365
- F1: 0.9304
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2377 | 1.0 | 878 | 0.0711 | 0.9176 | 0.9254 | 0.9215 | 0.9813 |
| 0.0514 | 2.0 | 1756 | 0.0637 | 0.9213 | 0.9346 | 0.9279 | 0.9831 |
| 0.031 | 3.0 | 2634 | 0.0623 | 0.9245 | 0.9365 | 0.9304 | 0.9834 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
madlag/bert-large-uncased-mnli | e23dadbe76e9249fd70520e845f6609d54447af2 | 2021-05-19T22:40:43.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | madlag | null | madlag/bert-large-uncased-mnli | 7 | null | transformers | 14,154 | ## BERT-large finetuned on MNLI.
The [reference finetuned model](https://github.com/google-research/bert) reports an accuracy of 86.05; this model reaches 86.7:
```
{'eval_loss': 0.3984006643295288, 'eval_accuracy': 0.8667345899133979}
``` |
malay-huggingface/albert-base-bahasa-cased | 10f861bd1e2acc169a3a4a69503be76f3890bd24 | 2021-09-26T12:35:12.000Z | [
"pytorch",
"albert",
"fill-mask",
"ms",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | malay-huggingface | null | malay-huggingface/albert-base-bahasa-cased | 7 | null | transformers | 14,155 | ---
language: ms
---
# albert-base-bahasa-cased
Pretrained ALBERT base language model for Malay.
## Pretraining Corpus
The `albert-base-bahasa-cased` model was pretrained on ~1.4 billion words. Below is a list of the data we trained on:
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can be reproduced from here: [Malaya/pretrained-model/albert](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/albert).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, and then initializing it like this:
```python
from transformers import AlbertTokenizer, AlbertModel
model = AlbertModel.from_pretrained('malay-huggingface/albert-base-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
'malay-huggingface/albert-base-bahasa-cased',
do_lower_case = False,
)
```
## Example using AutoModelWithLMHead
```python
from transformers import AlbertTokenizer, AlbertForMaskedLM, pipeline
model = AlbertForMaskedLM.from_pretrained('malay-huggingface/albert-base-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
'malay-huggingface/albert-base-bahasa-cased',
do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask('Permohonan Najib, anak untuk dengar isu perlembagaan [MASK] .')
```
Output is,
```text
[{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan Malaysia.',
'score': 0.09178723394870758,
'token': 1957,
'token_str': 'M a l a y s i a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan negara.',
'score': 0.053524162620306015,
'token': 2134,
'token_str': 'n e g a r a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan dikemukakan.',
'score': 0.031137527897953987,
'token': 9383,
'token_str': 'd i k e m u k a k a n'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan 1MDB.',
'score': 0.02826082520186901,
'token': 13838,
'token_str': '1 M D B'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan ditolak.',
'score': 0.026568090543150902,
'token': 11465,
'token_str': 'd i t o l a k'}]
```
|
malay-huggingface/t5-super-super-tiny-bahasa-cased | 83377fc13c6e65f0175637a6052c2228ebef3304 | 2021-09-10T13:03:44.000Z | [
"pytorch",
"t5",
"feature-extraction",
"ms",
"transformers"
]
| feature-extraction | false | malay-huggingface | null | malay-huggingface/t5-super-super-tiny-bahasa-cased | 7 | null | transformers | 14,156 | ---
language: ms
---
# t5-super-super-tiny-bahasa-cased
Pretrained T5 super-super-tiny language model for Malay.
## Pretraining Corpus
The `t5-super-super-tiny-bahasa-cased` model was pretrained on multiple tasks. Below is a list of the tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
Preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository https://github.com/google-research/text-to-text-transfer-transformer, on a v3-8 TPU.
- All steps can be reproduced from here: https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Huggingface `transformers` library, and then initializing it like this:
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Output is,
```
'Mahathir Mohamad'
```
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity. |
malteos/arqmath-bert-base-cased | 1dfd56b760877f17fd17167e406f023169b9f601 | 2021-05-19T22:47:01.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | malteos | null | malteos/arqmath-bert-base-cased | 7 | null | transformers | 14,157 | Entry not found |
manishiitg/spanbert-recruit-qa | 116072c40f930d1c8bc394cfdf4e79bbbff44e03 | 2021-05-19T22:54:34.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | manishiitg | null | manishiitg/spanbert-recruit-qa | 7 | null | transformers | 14,158 | Entry not found |
manueltonneau/biocovid-bert-large-cased | a0da109fe8ac5eb0eb03f1303c90630d3fdbb233 | 2020-06-03T07:40:46.000Z | [
"pytorch",
"transformers"
]
| null | false | manueltonneau | null | manueltonneau/biocovid-bert-large-cased | 7 | null | transformers | 14,159 | Entry not found |
marcelcastrobr/sagemaker-distilbert-emotion-2 | ea1e6ceb8029dcd9a5c60b7200ece01bd651b818 | 2021-11-19T13:19:22.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | marcelcastrobr | null | marcelcastrobr/sagemaker-distilbert-emotion-2 | 7 | null | transformers | 14,160 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9315
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1442
- Accuracy: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9316 | 1.0 | 500 | 0.2384 | 0.918 |
| 0.1849 | 2.0 | 1000 | 0.1599 | 0.9265 |
| 0.1047 | 3.0 | 1500 | 0.1442 | 0.9315 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
marcosgg/bert-base-gl-cased | 83354127383fd94a28cc3435d67ca9ca5dfe07bb | 2021-09-09T10:34:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"gl",
"pt",
"arxiv:2106.13553",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | marcosgg | null | marcosgg/bert-base-gl-cased | 7 | 1 | transformers | 14,161 | ---
language:
- gl
- pt
widget:
- text: "A mesa estaba feita de [MASK]."
---
# BERT for Galician (Base)
This is a base pre-trained BERT model (12 layers, cased) for Galician (ILG/RAG spelling). It was evaluated on lexical semantics tasks, using a [dataset to identify homonymy and synonymy in context](https://github.com/marcospln/homonymy_acl21), and presented at ACL 2021.
There is also a small version (6 layers, cased): `marcosgg/bert-small-gl-cased`
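A minimal masked-prediction sketch with the `transformers` pipeline (the example sentence is the one used in the model widget):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="marcosgg/bert-base-gl-cased")
print(fill_mask("A mesa estaba feita de [MASK]."))
```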
## Citation
If you use this model, please cite the following [paper](https://arxiv.org/abs/2106.13553):
```
@inproceedings{garcia-2021-exploring,
title = "Exploring the Representation of Word Meanings in Context: {A} Case Study on Homonymy and Synonymy",
author = "Garcia, Marcos",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.281",
doi = "10.18653/v1/2021.acl-long.281",
pages = "3625--3640"
}
```
|
maroo93/practice01 | 2f209143acfed203e856f429560fdaf0484a8cdd | 2021-05-19T23:06:33.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | maroo93 | null | maroo93/practice01 | 7 | null | transformers | 14,162 | Entry not found |
masapasa/sagemaker-distilbert-emotion | be07f4d280f8790f500863da22a0fe8ed6673001 | 2021-11-17T20:24:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | masapasa | null | masapasa/sagemaker-distilbert-emotion | 7 | null | transformers | 14,163 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.915
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2590
- Accuracy: 0.915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9292 | 1.0 | 500 | 0.2590 | 0.915 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
matheusntg/character-bert-pt-small | c56cbe8016cc429d96c3ccf9197aa81432f4e35a | 2021-06-27T01:12:29.000Z | [
"pytorch",
"character_bert",
"transformers"
]
| null | false | matheusntg | null | matheusntg/character-bert-pt-small | 7 | null | transformers | 14,164 | Entry not found |
maximedb/autonlp-vaccinchat-22134694 | e69a8cf88fb6c6fa1b5c18a43c045b8ec410305c | 2021-10-19T12:50:01.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"nl",
"dataset:maximedb/autonlp-data-vaccinchat",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | maximedb | null | maximedb/autonlp-vaccinchat-22134694 | 7 | null | transformers | 14,165 | ---
tags: autonlp
language: nl
widget:
- text: "I love AutoNLP 🤗"
datasets:
- maximedb/autonlp-data-vaccinchat
co2_eq_emissions: 14.525955245648218
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 22134694
- CO2 Emissions (in grams): 14.525955245648218
## Validation Metrics
- Loss: 1.7039562463760376
- Accuracy: 0.6369376479873717
- Macro F1: 0.5363181342408181
- Micro F1: 0.6369376479873717
- Weighted F1: 0.6309793486221543
- Macro Precision: 0.5533353910494714
- Micro Precision: 0.6369376479873717
- Weighted Precision: 0.676981050732216
- Macro Recall: 0.5828723356986293
- Micro Recall: 0.6369376479873717
- Weighted Recall: 0.6369376479873717
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/maximedb/autonlp-vaccinchat-22134694
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("maximedb/autonlp-vaccinchat-22134694", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("maximedb/autonlp-vaccinchat-22134694", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
medA/autonlp-FR_another_test-565016091 | c5f7ca051e965d70841fec81085136a8aafb6962 | 2022-02-11T11:08:02.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"dataset:medA/autonlp-data-FR_another_test",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | medA | null | medA/autonlp-FR_another_test-565016091 | 7 | null | transformers | 14,166 | ---
tags: autonlp
language: fr
widget:
- text: "I love AutoNLP 🤗"
datasets:
- medA/autonlp-data-FR_another_test
co2_eq_emissions: 70.54639641012226
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 565016091
- CO2 Emissions (in grams): 70.54639641012226
## Validation Metrics
- Loss: 0.5170354247093201
- Accuracy: 0.8545909432074056
- Macro F1: 0.7910662503820883
- Micro F1: 0.8545909432074056
- Weighted F1: 0.8539837213761081
- Macro Precision: 0.8033640381948799
- Micro Precision: 0.8545909432074056
- Weighted Precision: 0.856160322286008
- Macro Recall: 0.7841845637031052
- Micro Recall: 0.8545909432074056
- Weighted Recall: 0.8545909432074056
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/medA/autonlp-FR_another_test-565016091
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("medA/autonlp-FR_another_test-565016091", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("medA/autonlp-FR_another_test-565016091", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
megagonlabs/optimus-yelp | 5effa185646ac2111e1b5dbe6daaf2c6081a0490 | 2021-09-11T00:16:32.000Z | [
"pytorch",
"en",
"transformers",
"summarization",
"license:bsd-3-clause"
]
| summarization | false | megagonlabs | null | megagonlabs/optimus-yelp | 7 | null | transformers | 14,167 | ---
language: en
tags:
- summarization
inference: false
license: bsd-3-clause
---
## Optimus model
See the original GitHub repo for more details [here](https://github.com/megagonlabs/coop)
|
michaelrglass/albert-base-rci-wtq-col | 5998a0755b7e0b9589d626bd0105b33cdf9e2459 | 2021-06-16T16:03:50.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | michaelrglass | null | michaelrglass/albert-base-rci-wtq-col | 7 | null | transformers | 14,168 | Entry not found |
mmcquade11/autonlp-reuters-summarization-34018133 | 8be0083aeef0566c8a2d76fa0cf5e4b6e65efd28 | 2021-11-19T14:45:38.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:mmcquade11/autonlp-data-reuters-summarization",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | mmcquade11 | null | mmcquade11/autonlp-reuters-summarization-34018133 | 7 | null | transformers | 14,169 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- mmcquade11/autonlp-data-reuters-summarization
co2_eq_emissions: 286.4350821612984
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 34018133
- CO2 Emissions (in grams): 286.4350821612984
## Validation Metrics
- Loss: 1.1805976629257202
- Rouge1: 55.4013
- Rouge2: 30.8004
- RougeL: 52.57
- RougeLsum: 52.6103
- Gen Len: 15.3458
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mmcquade11/autonlp-reuters-summarization-34018133
``` |
morenolq/SumTO_FNS2020 | 85984f55df8284649b1d3403accf12a005f9cefc | 2021-05-20T00:12:45.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | morenolq | null | morenolq/SumTO_FNS2020 | 7 | null | transformers | 14,170 | This is the *best performing* model used in the paper: "End-to-end Training For Financial Report Summarization"
https://www.aclweb.org/anthology/2020.fnp-1.20/ |
moshew/minylm-L3-aug-sst2-distilled | 3838e96a5947e8607e6e3952110b63e0fcd881ff | 2022-02-24T09:50:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | moshew | null | moshew/minylm-L3-aug-sst2-distilled | 7 | null | transformers | 14,171 | ```
{'test_accuracy': 0.911697247706422,
 'test_loss': 0.24090610444545746,
 'test_runtime': 0.4372,
 'test_samples_per_second': 1994.475,
 'test_steps_per_second': 16.011}
``` |
moyix/csrc_774m | 71c51c0c94a3a7be5cfca0182526acdaa4e93853 | 2021-09-23T16:12:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"multilingual",
"transformers",
"programming",
"causal-lm",
"license:cc0-1.0"
]
| text-generation | false | moyix | null | moyix/csrc_774m | 7 | null | transformers | 14,172 | ---
language: multilingual
thumbnail: https://doesnotexist.codes/messlab.png
tags:
- programming
- gpt2
- causal-lm
license: cc0-1.0
---
# GPT-CSRC
This is a GPT2 774M model trained on the C/C++ code of the top 10,000 most popular packages in Debian, according to the [Debian Popularity Contest](https://popcon.debian.org/). The source files were deduplicated using a process similar to the OpenWebText preprocessing (basically a locality-sensitive hash to detect near-duplicates). The model was originally trained using [NVIDIA's Megatron-LM](https://github.com/nvidia/Megatron-LM) but has been converted to Huggingface. Note that the tokenizer is *not* the standard GPT2 BPE vocab, but one that has been trained for this dataset; the tokenizer is also available from this repository.
The processed dataset (in JSON format) can be found here: [csrc\_dataset\_large.json.gz](https://moyix.net/~moyix/csrc_dataset_large.json.gz).
This model was used to generate snippets for the web site [This Code Does Not Exist](https://doesnotexist.codes/).
# Usage
```
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> model = AutoModelForCausalLM.from_pretrained("moyix/csrc_774m")
>>> device = torch.device("cuda")
>>> model.to(device)
>>> tokenizer = AutoTokenizer.from_pretrained("moyix/csrc_774m")
>>> prompt = tokenizer.encode('// say hello\nvoid hello() {', return_tensors="pt")
>>> output = model.generate(input_ids=prompt.to(device), max_length=32, num_return_sequences=1, do_sample=True, num_beams=4)
>>> print(tokenizer.decode(output[0].tolist(),clean_up_tokenization_spaces=True))
// say hello
void hello() {
std::cout << "hello" << std::endl;
}
int main() {
``` |
mrm8488/RoBasquERTa | 16b2188473688a77c208a1c34de625d23a9b1bc6 | 2021-05-20T18:05:08.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"eu",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | mrm8488 | null | mrm8488/RoBasquERTa | 7 | null | transformers | 14,173 | ---
language: eu
widget:
- text: "Euskara da Euskal Herriko <mask> ofiziala"
- text: "Gaur egun, Euskadik Espainia osoko ekonomia <mask> du"
---
# RoBasquERTa: RoBERTa-like Language model trained on OSCAR Basque corpus
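A minimal fill-mask sketch with the `transformers` pipeline (the example sentence is taken from the model widget):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mrm8488/RoBasquERTa")
print(fill_mask("Euskara da Euskal Herriko <mask> ofiziala"))
```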
|
mrm8488/bert-small-2-finetuned-squadv2 | e8b1a7b4639c89d4e88e3f0f6c7e71b20461de69 | 2021-05-20T00:32:45.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | mrm8488 | null | mrm8488/bert-small-2-finetuned-squadv2 | 7 | null | transformers | 14,174 | Entry not found |
mrm8488/bert-small2bert-small_shared-finetuned-wikisql | f3ee0b019321f067819545b5cf8f8426783494c7 | 2020-11-12T23:43:15.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/bert-small2bert-small_shared-finetuned-wikisql | 7 | null | transformers | 14,175 | Entry not found |
mrm8488/bert-tiny-4-finetuned-squadv2 | fd13f8f03e48339846a50e76116726150e986963 | 2021-05-20T00:39:36.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | mrm8488 | null | mrm8488/bert-tiny-4-finetuned-squadv2 | 7 | null | transformers | 14,176 | Entry not found |
mrm8488/wav2vec2-large-xlsr-53-ukrainian | 91450763b57cea8b229a605e7e77a2504b74decf | 2021-07-06T13:20:13.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | mrm8488 | null | mrm8488/wav2vec2-large-xlsr-53-ukrainian | 7 | null | transformers | 14,177 | ---
language: uk
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Ukrainian Manuel Romero
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice uk
type: common_voice
args: uk
metrics:
- name: Test WER
type: wer
value: 41.82
---
# Wav2Vec2-Large-XLSR-53-ukrainian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Ukrainian using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "uk", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Ukrainian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "uk", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-ukrainian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 41.82 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found ???
|
msakthiganesh/TabQGen-Small | 59ae31fc5fafad22a2ffede5f9959fcad435672b | 2021-08-18T14:37:57.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | msakthiganesh | null | msakthiganesh/TabQGen-Small | 7 | null | transformers | 14,178 | > **TabQGen** model is released along with the dataset **Question Generation for Tables** in the paper - **Answer-Aware Question Generation from Tabular and Textual Data using T5** |
muhtasham/autonlp-Doctor_DE-24595544 | 6330780c5cd857927ce56e10384e11d63710c751 | 2021-10-22T10:51:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"de",
"dataset:muhtasham/autonlp-data-Doctor_DE",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | muhtasham | null | muhtasham/autonlp-Doctor_DE-24595544 | 7 | null | transformers | 14,179 | ---
tags: autonlp
language: de
widget:
- text: "I love AutoNLP 🤗"
datasets:
- muhtasham/autonlp-data-Doctor_DE
co2_eq_emissions: 92.87363201770962
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 24595544
- CO2 Emissions (in grams): 92.87363201770962
## Validation Metrics
- Loss: 0.3001164197921753
- MSE: 0.3001164197921753
- MAE: 0.24272102117538452
- R2: 0.8465975006681247
- RMSE: 0.5478288531303406
- Explained Variance: 0.8468209505081177
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/muhtasham/autonlp-Doctor_DE-24595544
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("muhtasham/autonlp-Doctor_DE-24595544", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("muhtasham/autonlp-Doctor_DE-24595544", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
mvonwyl/distilbert-base-uncased-finetuned-squad2 | 916458d4966367cca235fa26977b66106c1ca2f2 | 2021-11-02T19:29:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | mvonwyl | null | mvonwyl/distilbert-base-uncased-finetuned-squad2 | 7 | 1 | transformers | 14,180 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2023 | 1.0 | 8157 | 1.2166 |
| 0.9409 | 2.0 | 16314 | 1.3350 |
| 0.7472 | 3.0 | 24471 | 1.4218 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
mwesner/reformer-clm | 0e32b4721caa6f841f945786bec4b2dc78f3855e | 2021-09-05T13:44:41.000Z | [
"pytorch",
"reformer",
"text-generation",
"transformers",
"model-index"
]
| text-generation | false | mwesner | null | mwesner/reformer-clm | 7 | null | transformers | 14,181 | ---
model-index:
- name: reformer-clm
---
## reformer-clm
This causal language model was trained from scratch on the CNN/DailyMail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7783
## Model description
More information needed
## Intended uses & limitations
More information needed
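A minimal generation sketch is shown below; it assumes the repository bundles a compatible tokenizer, which this card does not state.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: a tokenizer is included with this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("mwesner/reformer-clm")
model = AutoModelForCausalLM.from_pretrained("mwesner/reformer-clm")

inputs = tokenizer("The government said on Tuesday", return_tensors="pt")
output = model.generate(inputs.input_ids, max_length=64, do_sample=True, top_k=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```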
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.8321 | 1.0 | 18412 | 3.8074 |
| 3.4965 | 2.0 | 36824 | 3.4223 |
| 3.1927 | 3.0 | 55236 | 3.0815 |
| 3.046 | 4.0 | 73648 | 2.9270 |
| 2.9781 | 5.0 | 92060 | 2.8515 |
| 2.9398 | 6.0 | 110472 | 2.8082 |
| 2.9293 | 7.0 | 128884 | 2.7904 |
| 2.9212 | 8.0 | 147296 | 2.7817 |
| 2.9169 | 9.0 | 165708 | 2.7787 |
| 2.9197 | 10.0 | 184120 | 2.7783 |
### Framework versions
- Transformers 4.6.1
- Pytorch 1.9.0
- Datasets 1.2.1
- Tokenizers 0.10.3
|
nielsr/detr-resnet-50 | 05dcd30831b61ef54f178c9cd63f490dc7e02700 | 2021-06-08T13:56:54.000Z | [
"pytorch",
"detr",
"object-detection",
"transformers"
]
| object-detection | false | nielsr | null | nielsr/detr-resnet-50 | 7 | null | transformers | 14,182 | Entry not found |
norirahul/SMSTransformer | 106320210f55e02f24d5c49a00b9c12395766399 | 2021-11-04T18:19:21.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | norirahul | null | norirahul/SMSTransformer | 7 | null | transformers | 14,183 | Entry not found |
nouamanetazi/cover-letter-gpt2 | 04e2676216105d11c8ffa90034820be33d2e829b | 2021-11-25T01:18:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | nouamanetazi | null | nouamanetazi/cover-letter-gpt2 | 7 | 1 | transformers | 14,184 | Entry not found |
nurkayevaa/autonlp-bert-covid-407910467 | c1db9e60bf8efe1a199aed39b5fa63f3537e3ff6 | 2021-12-11T05:31:06.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:nurkayevaa/autonlp-data-bert-covid",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | nurkayevaa | null | nurkayevaa/autonlp-bert-covid-407910467 | 7 | null | transformers | 14,185 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- nurkayevaa/autonlp-data-bert-covid
co2_eq_emissions: 10.719439124704492
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 407910467
- CO2 Emissions (in grams): 10.719439124704492
## Validation Metrics
- Loss: 0.12029844522476196
- Accuracy: 0.9516339869281045
- Precision: 0.9477786438035853
- Recall: 0.9650793650793651
- AUC: 0.9907376734912967
- F1: 0.9563507668108534
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/nurkayevaa/autonlp-bert-covid-407910467
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("nurkayevaa/autonlp-bert-covid-407910467", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("nurkayevaa/autonlp-bert-covid-407910467", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
obss/mt5-base-3task-highlight-tquad2 | 03b01259cdf1c87c7824e6c9121992b6332140a3 | 2021-12-03T23:49:59.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"tr",
"dataset:tquad1",
"dataset:tquad2",
"dataset:xquad",
"arxiv:2111.06476",
"transformers",
"question-generation",
"answer-extraction",
"question-answering",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible"
]
| text2text-generation | false | obss | null | obss/mt5-base-3task-highlight-tquad2 | 7 | null | transformers | 14,186 | ---
language: tr
datasets:
- tquad1
- tquad2
- xquad
tags:
- text2text-generation
- question-generation
- answer-extraction
- question-answering
- text-generation
pipeline_tag: text2text-generation
widget:
- text: "generate question: Legendary Entertainment, 2016 yılında bilimkurgu romanı Dune'un <hl> film ve TV haklarını <hl> satın aldı. Geliştirme kısa bir süre sonra başladı. Villeneuve projeye olan ilgisini dile getirdi ve resmi olarak yönetmen olarak imza attı. Roth ve Spaihts ile birlikte çalışarak senaryoyu iki bölüme ayırdı ve 1965 romanının 21. yüzyıla güncellenmiş bir uyarlamasını ekledi."
example_title: "Question Generation (Movie)"
- text: "generate question: Fatih Sultan Mehmet, Cenevizlilerin önemli üslerinden Amasra’yı aldı. 1479’da <hl> bir antlaşma yaparak <hl> Venedik'le 16 yıllık savaşa son verdi."
example_title: "Question Generation (History)"
- text: "generate question: Cenevizlilerin önemli üslerinden Amasra’yı aldı. 1479’da bir antlaşma yaparak <hl> Venedik'le <hl> 16 yıllık savaşa sona verdi."
example_title: "Question Generation (History 2)"
- text: "extract answers: Cenevizlilerin önemli üslerinden Amasra’yı aldı. <hl> 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi. <hl>"
example_title: "Answer Extraction (History)"
- text: "question: Sunulan yöntemle ne yapılabilir? context: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme / Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir."
license: cc-by-4.0
---
# mt5-base for Turkish Question Generation
Automated question generation and question answering using text-to-text transformers by OBSS AI.
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-base-3task-highlight-tquad2')
```
## Citation 📜
```
@article{akyon2021automated,
title={Automated question generation and question answering from Turkish texts using text-to-text transformers},
author={Akyon, Fatih Cagatay and Cavusoglu, Devrim and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin},
journal={arXiv preprint arXiv:2111.06476},
year={2021}
}
```
## Overview ✔️
**Language model:** mt5-base
**Language:** Turkish
**Downstream-task:** Extractive QA/QG, Answer Extraction
**Training data:** TQuADv2-train
**Code:** https://github.com/obss/turkish-question-generation
**Paper:** https://arxiv.org/abs/2111.06476
## Hyperparameters
```
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-base"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "highlight"
```
## Performance
Refer to [paper](https://arxiv.org/abs/2111.06476).
## Usage 🔥
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-base-3task-highlight-tquad2')
context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme
/ Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir.
Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""
# a) Fully Automated Question Generation
generation_api(task='question-generation', context=context)
# b) Question Answering
question = "Bu model ne işe yarar?"
generation_api(task='question-answering', context=context, question=question)
# b) Answer Extraction
generation_api(task='answer-extraction', context=context)
``` |
obss/mt5-small-3task-highlight-tquad2 | 030833b0f378afbb88130ed124ae20a50f932863 | 2021-12-03T23:48:47.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"tr",
"dataset:tquad1",
"dataset:tquad2",
"dataset:xquad",
"arxiv:2111.06476",
"transformers",
"question-generation",
"answer-extraction",
"question-answering",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible"
]
| text2text-generation | false | obss | null | obss/mt5-small-3task-highlight-tquad2 | 7 | null | transformers | 14,187 | ---
language: tr
datasets:
- tquad1
- tquad2
- xquad
tags:
- text2text-generation
- question-generation
- answer-extraction
- question-answering
- text-generation
pipeline_tag: text2text-generation
widget:
- text: "generate question: Legendary Entertainment, 2016 yılında bilimkurgu romanı Dune'un <hl> film ve TV haklarını <hl> satın aldı. Geliştirme kısa bir süre sonra başladı. Villeneuve projeye olan ilgisini dile getirdi ve resmi olarak yönetmen olarak imza attı. Roth ve Spaihts ile birlikte çalışarak senaryoyu iki bölüme ayırdı ve 1965 romanının 21. yüzyıla güncellenmiş bir uyarlamasını ekledi."
example_title: "Question Generation (Movie)"
- text: "generate question: Fatih Sultan Mehmet, Cenevizlilerin önemli üslerinden Amasra’yı aldı. <hl> 1479’da <hl> bir antlaşma yaparak Venedik'le 16 yıllık savaşa son verdi."
example_title: "Question Generation (History)"
- text: "extract answers: Cenevizlilerin önemli üslerinden Amasra’yı aldı. <hl> 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi. <hl>"
example_title: "Answer Extraction (History)"
- text: "question: Bu model ne ise yarar? context: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme / Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir."
example_title: "Answer Extraction (Open Domain)"
license: cc-by-4.0
---
# mt5-small for Turkish Question Generation
Automated question generation and question answering using text-to-text transformers by OBSS AI.
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-small-3task-highlight-tquad2')
```
## Citation 📜
```
@article{akyon2021automated,
title={Automated question generation and question answering from Turkish texts using text-to-text transformers},
author={Akyon, Fatih Cagatay and Cavusoglu, Devrim and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin},
journal={arXiv preprint arXiv:2111.06476},
year={2021}
}
```
## Overview ✔️
**Language model:** mt5-small
**Language:** Turkish
**Downstream-task:** Extractive QA/QG, Answer Extraction
**Training data:** TQuADv2-train
**Code:** https://github.com/obss/turkish-question-generation
**Paper:** https://arxiv.org/abs/2111.06476
## Hyperparameters
```
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-small"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "highlight"
```
## Performance
Refer to [paper](https://arxiv.org/abs/2111.06476).
## Usage 🔥
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-small-3task-highlight-tquad2')
context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme
/ Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir.
Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""
# a) Fully Automated Question Generation
generation_api(task='question-generation', context=context)
# b) Question Answering
question = "Bu model ne işe yarar?"
generation_api(task='question-answering', context=context, question=question)
# b) Answer Extraction
generation_api(task='answer-extraction', context=context)
```
|
oferweintraub/bert-base-finance-sentiment-noisy-search | 570ae1d10cfad341b8c0607097a00b3a4a838f59 | 2022-03-31T14:13:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"Finance-sentiment-analysis",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | oferweintraub | null | oferweintraub/bert-base-finance-sentiment-noisy-search | 7 | 1 | transformers | 14,188 | ---
license: apache-2.0
tags:
- Finance-sentiment-analysis
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: bert-base-finance-sentiment-noisy-search
results: []
widget:
- text: "Third quarter reported revenues were $10.9 billion, up 5 percent compared to prior year and up 8 percent on a currency-neutral basis"
example_title: "Positive"
- text: "The London-listed website for businesses reported a pretax loss of $26.6 million compared with a loss of $12.9 million the previous year"
example_title: "Negative"
- text: "Microsoft updates Outlook, Teams, and PowerPoint to be hybrid work ready"
example_title: "Neutral"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finance-sentiment-noisy-search
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Kaggle finance news sentiment analysis dataset, with data enhancement using noisy search. The process is explained below:
1. First, "bert-base-uncased" was fine-tuned on Kaggle's finance news sentiment analysis dataset (https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news), achieving an accuracy of about 88%.
2. We then used a logistic-regression classifier on the same data. Here we looked at coefficients that contributed the most to the "Positive" and "Negative" classes by inspecting only bi-grams.
3. Using the top 25 bi-grams per class (i.e. "Positive" / "Negative") we invoked Bing news search with those bi-grams and retrieved up to 50 news items per bi-gram phrase.
4. We called it "noisy-search" because it is assumed that the positive bi-grams (e.g. "profit rose", "growth net") give rise to positive examples, whereas negative bi-grams (e.g. "loss increase", "share loss") result in negative examples; note that we did not test the validity of this assumption (hence: noisy-search).
5. For each article we kept the title + excerpt and labeled it according to pre-assumptions on class associations.
6. We then trained the same model on the noisy data and apply it to an held-out test set from the original data set split.
7. Training with couple of thousands noisy "positives" and "negatives" examples yielded a test set accuracy of about 95%.
8. It shows that by automatically collecting noisy examples using search we can boost accuracy performance from about 88% to more than 95%.
Accuracy results for Logistic Regression (LR) and BERT (base-cased) are shown in the attached pdf:
https://drive.google.com/file/d/1MI9gRdppactVZ_XvhCwvoaOV1aRfprrd/view?usp=sharing
## Model description
BERT model trained on noisy data from search results. See PDF for more details.
## Intended uses & limitations
Intended for use on finance news sentiment analysis with 3 options: "Positive", "Neutral" and "Negative"
To get the best results, feed the classifier the title plus either the first paragraph or a short news summary of up to about 64 tokens.
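A minimal usage sketch, reusing one of the widget examples above as input:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="oferweintraub/bert-base-finance-sentiment-noisy-search",
)

# Widget example from this card; expected label: "Positive".
headline = (
    "Third quarter reported revenues were $10.9 billion, up 5 percent compared "
    "to prior year and up 8 percent on a currency-neutral basis"
)
print(classifier(headline))
```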
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
orzhan/bart-transcription-aggregation | dc194adf5bfd70504bbd0896572e6e348e19098e | 2022-06-11T07:22:21.000Z | [
"pytorch",
"bart",
"text2text-generation",
"ru",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | orzhan | null | orzhan/bart-transcription-aggregation | 7 | null | transformers | 14,189 | ---
language: ru
---
BART model fine-tuned to aggregate crowd-sourced transcriptions.
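The card does not document the expected input format; a heavily hedged sketch, assuming the crowd transcriptions are joined into a single input string (the separator below is a guess, not taken from the repository):
```python
from transformers import pipeline

aggregator = pipeline("text2text-generation", model="orzhan/bart-transcription-aggregation")

# Hypothetical crowd transcriptions of the same Russian utterance; how they are
# concatenated is an assumption, not documented in this card.
transcriptions = [
    "привет как дела",
    "привет как дила",
    "привет, как дела?",
]
print(aggregator(" | ".join(transcriptions), max_length=64)[0]["generated_text"])
```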
Repository: [GitHub](https://github.com/orzhan/bart-transcription-aggregation) |
pablouribe/bertstem-copus-supercategories | 54f1db39a917e6cdaaba916f0e1b448c037769fb | 2022-02-01T06:00:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | pablouribe | null | pablouribe/bertstem-copus-supercategories | 7 | null | transformers | 14,190 | Entry not found |
patrickvonplaten/distilhubert-timit | b4bf85616fe4d219d41ce82f55c92a659a8c7672 | 2021-10-28T00:32:57.000Z | [
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/distilhubert-timit | 7 | null | transformers | 14,191 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: distilhubert-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-timit
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3601
- Wer: 0.6776
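A minimal transcription sketch (the audio path below is a placeholder; decoding local audio files requires ffmpeg):
```python
from transformers import pipeline

# DistilHuBERT fine-tuned with a CTC head plugs into the standard ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/distilhubert-timit")

# "sample.wav" is a placeholder; any 16 kHz mono English recording should work.
print(asr("sample.wav")["text"])
```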
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.4447 | 0.69 | 100 | 4.9546 | 1.0 |
| 2.9499 | 1.38 | 200 | 2.9519 | 1.0 |
| 2.8989 | 2.07 | 300 | 2.8624 | 1.0 |
| 2.2076 | 2.76 | 400 | 2.1089 | 1.0008 |
| 1.4186 | 3.45 | 500 | 1.4112 | 0.9165 |
| 0.9951 | 4.14 | 600 | 1.1378 | 0.7701 |
| 0.9754 | 4.83 | 700 | 1.0152 | 0.7274 |
| 0.9364 | 5.52 | 800 | 0.9619 | 0.7011 |
| 0.6557 | 6.21 | 900 | 0.9144 | 0.6868 |
| 0.5681 | 6.9 | 1000 | 0.8899 | 0.6683 |
| 0.66 | 7.59 | 1100 | 0.8992 | 0.6654 |
| 0.6144 | 8.28 | 1200 | 0.9299 | 0.6898 |
| 0.4099 | 8.97 | 1300 | 0.9510 | 0.6674 |
| 0.3384 | 9.66 | 1400 | 0.9598 | 0.6612 |
| 0.3163 | 10.34 | 1500 | 0.9954 | 0.6612 |
| 0.4204 | 11.03 | 1600 | 1.0164 | 0.6607 |
| 0.1932 | 11.72 | 1700 | 1.0637 | 0.6658 |
| 0.1449 | 12.41 | 1800 | 1.1190 | 0.6652 |
| 0.1803 | 13.1 | 1900 | 1.1260 | 0.6689 |
| 0.328 | 13.79 | 2000 | 1.2186 | 0.6751 |
| 0.0838 | 14.48 | 2100 | 1.2591 | 0.6909 |
| 0.0766 | 15.17 | 2200 | 1.2529 | 0.6780 |
| 0.0956 | 15.86 | 2300 | 1.2537 | 0.6668 |
| 0.2339 | 16.55 | 2400 | 1.3210 | 0.6797 |
| 0.0431 | 17.24 | 2500 | 1.3241 | 0.6781 |
| 0.0508 | 17.93 | 2600 | 1.3184 | 0.6683 |
| 0.0616 | 18.62 | 2700 | 1.3728 | 0.6889 |
| 0.1608 | 19.31 | 2800 | 1.3572 | 0.6771 |
| 0.0378 | 20.0 | 2900 | 1.3601 | 0.6776 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-common_voice-tr-demo-dist | 45206a11c08415e272b5714542768f23e6903f7a | 2021-12-20T12:54:17.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"speech-recognition",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-common_voice-tr-demo-dist | 7 | 1 | transformers | 14,192 | ---
language:
- tr
license: apache-2.0
tags:
- speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo-dist
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
- Wer: 0.3581
- Cer: 0.0805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- num_gpus: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7391 | 0.92 | 100 | 3.5760 | 1.0 |
| 2.927 | 1.83 | 200 | 3.0796 | 0.9999 |
| 0.9009 | 2.75 | 300 | 0.9278 | 0.8226 |
| 0.6529 | 3.67 | 400 | 0.5926 | 0.6367 |
| 0.3623 | 4.59 | 500 | 0.5372 | 0.5692 |
| 0.2888 | 5.5 | 600 | 0.4407 | 0.4838 |
| 0.285 | 6.42 | 700 | 0.4341 | 0.4694 |
| 0.0842 | 7.34 | 800 | 0.4153 | 0.4302 |
| 0.1415 | 8.26 | 900 | 0.4317 | 0.4136 |
| 0.1552 | 9.17 | 1000 | 0.4145 | 0.4013 |
| 0.1184 | 10.09 | 1100 | 0.4115 | 0.3844 |
| 0.0556 | 11.01 | 1200 | 0.4182 | 0.3862 |
| 0.0851 | 11.93 | 1300 | 0.3985 | 0.3688 |
| 0.0961 | 12.84 | 1400 | 0.4030 | 0.3665 |
| 0.0596 | 13.76 | 1500 | 0.3880 | 0.3631 |
| 0.0359 | 14.68 | 1600 | 0.3878 | 0.3589 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-xls-r-phoneme-300m-sv | 54e90f03f604b5231ec29b0a218008c4304e728c | 2021-12-10T18:14:51.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"sv",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-xls-r-phoneme-300m-sv | 7 | 1 | transformers | 14,193 | ---
language:
- sv
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-phoneme-300m-sv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2vec2-xls-r-phoneme-300m-sv
**Note**: The tokenizer was created from the official Swedish phoneme vocabulary as defined here: https://github.com/microsoft/UniSpeech/blob/main/UniSpeech/examples/unispeech/data/sv/phonesMatches_reduced.json
One can simply download the file, rename it to `vocab.json` and load a `Wav2Vec2PhonemeCTCTokenizer.from_pretrained("./directory/with/vocab.json/")`.
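A small sketch of that step (the raw-file URL is derived from the GitHub link above; `phonemizer` may be needed as an extra dependency for phonemization):
```python
import os
import urllib.request

from transformers import Wav2Vec2PhonemeCTCTokenizer

# Raw URL for the official Swedish phoneme vocabulary referenced above.
VOCAB_URL = (
    "https://raw.githubusercontent.com/microsoft/UniSpeech/main/"
    "UniSpeech/examples/unispeech/data/sv/phonesMatches_reduced.json"
)

os.makedirs("sv_phoneme_tokenizer", exist_ok=True)
urllib.request.urlretrieve(VOCAB_URL, "sv_phoneme_tokenizer/vocab.json")

# Load the tokenizer from the directory holding vocab.json, as described above.
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("sv_phoneme_tokenizer")
```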
This model is a fine-tuned version of [wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9707
- PER: 0.2215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
See Tensorboard traces
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.8.1
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-xls-r-phoneme-300m-tr | 04b8ab164da2249f6dc1181632dda420ff5a2cdd | 2021-12-10T18:10:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-xls-r-phoneme-300m-tr | 7 | 1 | transformers | 14,194 | ---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-phoneme-300m-tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2vec2-xls-r-phoneme-300m-tr
This model is a fine-tuned version of [wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6380
- PER: 0.1664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | PER |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 13.6687 | 0.92 | 100 | 12.4567 | 1.0 |
| 3.4219 | 1.83 | 200 | 3.4704 | 1.0 |
| 3.1846 | 2.75 | 300 | 3.2281 | 0.9935 |
| 2.0076 | 3.67 | 400 | 1.7415 | 0.5222 |
| 1.0244 | 4.59 | 500 | 1.0290 | 0.3323 |
| 0.7095 | 5.5 | 600 | 0.8424 | 0.2859 |
| 0.619 | 6.42 | 700 | 0.7389 | 0.2232 |
| 0.3541 | 7.34 | 800 | 0.7049 | 0.2043 |
| 0.2946 | 8.26 | 900 | 0.7065 | 0.2153 |
| 0.2868 | 9.17 | 1000 | 0.6840 | 0.2115 |
| 0.2245 | 10.09 | 1100 | 0.6714 | 0.1952 |
| 0.1394 | 11.01 | 1200 | 0.6864 | 0.1954 |
| 0.1288 | 11.93 | 1300 | 0.6696 | 0.2017 |
| 0.1568 | 12.84 | 1400 | 0.6468 | 0.1843 |
| 0.1269 | 13.76 | 1500 | 0.6736 | 0.1965 |
| 0.1101 | 14.68 | 1600 | 0.6689 | 0.1915 |
| 0.1388 | 15.6 | 1700 | 0.6690 | 0.1782 |
| 0.0739 | 16.51 | 1800 | 0.6364 | 0.1734 |
| 0.0897 | 17.43 | 1900 | 0.6480 | 0.1748 |
| 0.0795 | 18.35 | 2000 | 0.6356 | 0.1695 |
| 0.0823 | 19.27 | 2100 | 0.6382 | 0.1685 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.8.1
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
pertschuk/albert-base-squad-classifier-ms | 663fc9e2bd526913a0148e9cc684c48017e744e5 | 2020-04-24T16:05:01.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | pertschuk | null | pertschuk/albert-base-squad-classifier-ms | 7 | null | transformers | 14,195 | Entry not found |
pertschuk/albert-base-squad-classifier | 89541a8e1866d241941870bfa2ea1ee514a0a6a0 | 2020-04-24T16:05:03.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | pertschuk | null | pertschuk/albert-base-squad-classifier | 7 | null | transformers | 14,196 | Entry not found |
piEsposito/braquad-bert-qna | e246fa43bb8060ee503002c5c3707dac14bb5b71 | 2021-05-20T02:42:18.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"pt-br",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | piEsposito | null | piEsposito/braquad-bert-qna | 7 | 1 | transformers | 14,197 | ---
language:
- pt-br
tags:
- question-answering
license: apache-2.0
pipeline_tag: question-answering
metrics:
- em
- f1
---
# BraQuAD BERT
## Model description
This is a question-answering model trained on BraQuAD 2.0, a version of SQuAD 2.0 translated to PT-BR using the Google Cloud Translation API.
### Context
Edith Ranzini (São Paulo,[1] 1946) é uma engenheira brasileira formada pela USP, professora doutora da Pontifícia Universidade Católica de São Paulo[2] e professora sênior da Escola Politécnica da Universidade de São Paulo (Poli).[3] Ela compôs a equipe responsável pela criação do primeiro computador brasileiro, o Patinho Feio,[1] em 1972, e participou do grupo de instituidores da Fundação para o Desenvolvimento Tecnológico da Engenharia, sendo a única mulher do mesmo.[4][2] Atua nas áreas de inteligência artificial, engenharia de computação, redes neurais e sistemas gráficos.
Na sua época de prestar o vestibular, inscreveu-se para física na USP e para engenharia na Poli-USP,[3] sendo aprovada nesta última em 1965, ingressando como uma das 12 mulheres do total de 360 calouros.
Em 1969, formou-se como engenheira de eletricidade, permanecendo na universidade para fazer sua pós-graduação. Nessa época entrou para o Laboratório de Sistemas Digitais (LSD),atual Departamento de Engenharia de Computação e Sistemas Digitais, criado pelo professor Antônio Hélio Guerra Vieira.[3] Em 1970, deu início ao seu mestrado em Engenharia de Sistemas pela USP, concluindo o mesmo em 1975.[2] Nesse período, permaneceu no LSD e fez parte do grupo responsável pelo desenvolvimento do primeiro computador brasileiro, o Patinho Feio (1971-1972) e do G10 (1973-1975), primeiro computador brasileiro de médio porte, feito para o Grupo de trabalho Especial (GTE), posteriormente Digibras.
### Examples:
1-Alem do Patinho feio qual outro projeto edith trabalhou? Answer: G10
2-Quantas mulheres entraram na Poli em 1965? Answer: 12
3-Qual grande projeto edith trabalhou? Answer: do primeiro computador brasileiro
4-Qual o primeiro computador brasileiro? Answer: Patinho Feio
## Expected results
As an example, we show a context and some questions you can ask, as well as the expected answers. These QnA pairs were not part of the training dataset.
#### How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
import torch
mname = "piEsposito/braquad-bert-qna"
model = AutoModelForQuestionAnswering.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)
context = """Edith Ranzini (São Paulo,[1] 1946) é uma engenheira brasileira formada pela USP, professora doutora da Pontifícia Universidade Católica de São Paulo[2] e professora sênior da Escola Politécnica da Universidade de São Paulo (Poli).[3] Ela compôs a equipe responsável pela criação do primeiro computador brasileiro, o Patinho Feio,[1] em 1972, e participou do grupo de instituidores da Fundação para o Desenvolvimento Tecnológico da Engenharia, sendo a única mulher do mesmo.[4][2] Atua nas áreas de inteligência artificial, engenharia de computação, redes neurais e sistemas gráficos.
Na sua época de prestar o vestibular, inscreveu-se para física na USP e para engenharia na Poli-USP,[3] sendo aprovada nesta última em 1965, ingressando como uma das 12 mulheres do total de 360 calouros.[5]
Em 1969, formou-se como engenheira de eletricidade,[2][3] permanecendo na universidade para fazer sua pós-graduação. Nessa época entrou para o Laboratório de Sistemas Digitais (LSD),atual Departamento de Engenharia de Computação e Sistemas Digitais, criado pelo professor Antônio Hélio Guerra Vieira.[3] Em 1970, deu início ao seu mestrado em Engenharia de Sistemas pela USP, concluindo o mesmo em 1975.[2] Nesse período, permaneceu no LSD e fez parte do grupo responsável pelo desenvolvimento do primeiro computador brasileiro, o Patinho Feio (1971-1972) e do G10 (1973-1975), primeiro computador brasileiro de médio porte, feito para o Grupo de trabalho Especial (GTE), posteriormente Digibras."""
# you can try this for all the examples above.
question = 'Qual grande projeto edith trabalhou?'
string = f"[CLS] {question} [SEP] {context} [SEP]"
as_tensor = torch.Tensor(tokenizer.encode(string)).unsqueeze(0)
outputs = model(as_tensor.long())  # returns a QuestionAnsweringModelOutput in transformers >= 4.0
s, e = torch.argmax(outputs.start_logits[0]), torch.argmax(outputs.end_logits[0])
print(tokenizer.decode(tokenizer.encode(string)[s:e+1])) # 'do primeiro computador brasileiro'
```
#### Limitations and bias
- The model is trained on a dataset translated using the Google Cloud Translation API. Because of that, some labels are not identical to the answers, so performance cannot reach the level of hand-curated English models. Still, it is good progress towards QnA in PT-BR.
## Training data
[BraQuAD dataset](https://github.com/piEsposito/br-quad-2.0).
## Training procedure
## Eval results
EM | F1
-------|---------
0.62 | 0.69
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020},
title={BraQuAD - Dataset para Question Answering em PT-BR},
author={Esposito, Wladimir and Esposito, Piero and Tamais, Ana},
}
```
|
profoz/mlops-demo | 860cee26a30e3d9bb10f515998a6f64c03c45c4c | 2022-04-08T13:56:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"classification",
"sequence-classification",
"license:apache-2.0"
]
| text-classification | false | profoz | null | profoz/mlops-demo | 7 | null | transformers | 14,198 | ---
language:
- en
tags:
- classification
- sequence-classification
license: apache-2.0
---
Github repository [here](https://github.com/sinanuozdemir/oreilly-transformers-nlp) |
qarib/bert-base-qarib_far_9920k | d831feb7887c0703f8439d8193a23611362da11e | 2021-04-21T13:38:28.000Z | [
"pytorch",
"ar",
"dataset:arabic_billion_words",
"dataset:open_subtitles",
"dataset:twitter",
"dataset:Farasa",
"arxiv:2102.10684",
"transformers",
"tf",
"QARiB",
"qarib"
]
| null | false | qarib | null | qarib/bert-base-qarib_far_9920k | 7 | null | transformers | 14,199 | ---
language: ar
tags:
- pytorch
- tf
- QARiB
- qarib
datasets:
- arabic_billion_words
- open_subtitles
- twitter
- Farasa
metrics:
- f1
widget:
- text: "و+قام ال+مدير [MASK]"
---
# QARiB: QCRI Arabic and Dialectal BERT
## About QARiB Farasa
QCRI Arabic and Dialectal BERT (QARiB) model, was trained on a collection of ~ 420 Million tweets and ~ 180 Million sentences of text.
For the tweets, the data was collected using the Twitter API with the language filter `lang:ar`. The text data is a combination of
[Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/).
QARiB is the Arabic word for "boat".
## Model and Parameters:
- Data size: 14B tokens
- Vocabulary: 64k
- Iterations: 10M
- Number of Layers: 12
## Training QARiB
See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md)
## Using QARiB
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md)
This model expects the data to be segmented. You may use [Farasa Segmenter](https://farasa-api.qcri.org/segmentation/) API.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>>from transformers import pipeline
>>>fill_mask = pipeline("fill-mask", model="./models/bert-base-qarib_far")
>>> fill_mask("و+قام ال+مدير [MASK]")
[
]
>>> fill_mask("و+قام+ت ال+مدير+ة [MASK]")
[
]
>>> fill_mask("قللي وشفيييك يرحم [MASK]")
[
]
```
## Evaluations:
|**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**|
|---------------|---------|--------------|--------------|--------------|---------|
|Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** |
|Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** |
|Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% |
|Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** |
|Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% |
## Model Weights and Vocab Download
From Huggingface site: https://huggingface.co/qarib/bert-base-qarib_far
## Contacts
Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih
## Reference
```
@article{abdelali2021pretraining,
title={Pre-Training BERT on Arabic Tweets: Practical Considerations},
author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih},
year={2021},
eprint={2102.10684},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |